I0526 10:46:53.845851 6 e2e.go:224] Starting e2e run "362666b9-9f3e-11ea-b1d1-0242ac110018" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590490013 - Will randomize all specs
Will run 201 of 2164 specs

May 26 10:46:54.031: INFO: >>> kubeConfig: /root/.kube/config
May 26 10:46:54.033: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 26 10:46:54.047: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 26 10:46:54.105: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 26 10:46:54.105: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 26 10:46:54.105: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 26 10:46:54.114: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 26 10:46:54.114: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 26 10:46:54.114: INFO: e2e test version: v1.13.12
May 26 10:46:54.116: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 10:46:54.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
May 26 10:46:54.307: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
May 26 10:46:54.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cv9gw'
May 26 10:46:59.696: INFO: stderr: ""
May 26 10:46:59.696: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 26 10:46:59.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cv9gw'
May 26 10:46:59.810: INFO: stderr: ""
May 26 10:46:59.810: INFO: stdout: "update-demo-nautilus-gb8l2 update-demo-nautilus-znhg7 "
May 26 10:46:59.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb8l2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cv9gw'
May 26 10:46:59.925: INFO: stderr: ""
May 26 10:46:59.925: INFO: stdout: ""
May 26 10:46:59.925: INFO: update-demo-nautilus-gb8l2 is created but not running
May 26 10:47:04.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cv9gw'
May 26 10:47:05.024: INFO: stderr: ""
May 26 10:47:05.024: INFO: stdout: "update-demo-nautilus-gb8l2 update-demo-nautilus-znhg7 "
May 26 10:47:05.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb8l2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cv9gw'
May 26 10:47:05.123: INFO: stderr: ""
May 26 10:47:05.123: INFO: stdout: "true"
May 26 10:47:05.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb8l2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cv9gw'
May 26 10:47:05.213: INFO: stderr: ""
May 26 10:47:05.213: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 26 10:47:05.213: INFO: validating pod update-demo-nautilus-gb8l2
May 26 10:47:05.223: INFO: got data: { "image": "nautilus.jpg" }
May 26 10:47:05.223: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 26 10:47:05.224: INFO: update-demo-nautilus-gb8l2 is verified up and running
May 26 10:47:05.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znhg7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cv9gw'
May 26 10:47:05.341: INFO: stderr: ""
May 26 10:47:05.341: INFO: stdout: "true"
May 26 10:47:05.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znhg7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cv9gw'
May 26 10:47:05.447: INFO: stderr: ""
May 26 10:47:05.447: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 26 10:47:05.447: INFO: validating pod update-demo-nautilus-znhg7
May 26 10:47:05.495: INFO: got data: { "image": "nautilus.jpg" }
May 26 10:47:05.495: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 26 10:47:05.495: INFO: update-demo-nautilus-znhg7 is verified up and running
STEP: using delete to clean up resources
May 26 10:47:05.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cv9gw'
May 26 10:47:05.612: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 26 10:47:05.612: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 26 10:47:05.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-cv9gw'
May 26 10:47:05.729: INFO: stderr: "No resources found.\n"
May 26 10:47:05.729: INFO: stdout: ""
May 26 10:47:05.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-cv9gw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 26 10:47:05.836: INFO: stderr: ""
May 26 10:47:05.836: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 10:47:05.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cv9gw" for this suite.
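The manifest piped into `kubectl create -f -` above is never printed by the test. A minimal sketch of a replication controller consistent with the log is shown below; only the controller name, selector label, container name, image, and the two-pod count come from the log output — the rest of the structure is an assumption:

```yaml
# Hypothetical reconstruction of the update-demo manifest fed to `kubectl create -f -`.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                 # the log lists two pods, -gb8l2 and -znhg7
  selector:
    name: update-demo         # matches the `-l name=update-demo` queries in the log
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

The go-template polling in the log then reads `.status.containerStatuses` for each pod and retries every five seconds until the template prints `true`, i.e. until a container named `update-demo` reports a `running` state.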
May 26 10:47:11.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 10:47:11.880: INFO: namespace: e2e-tests-kubectl-cv9gw, resource: bindings, ignored listing per whitelist
May 26 10:47:11.944: INFO: namespace e2e-tests-kubectl-cv9gw deletion completed in 6.10242433s

• [SLOW TEST:17.828 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 10:47:11.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-vrdmn
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
May 26 10:47:15.344: INFO: Found 0 stateful pods, waiting for 3
May 26 10:47:25.363: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 26 10:47:25.363: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 26 10:47:25.363: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 26 10:47:35.349: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 26 10:47:35.349: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 26 10:47:35.349: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
May 26 10:47:35.375: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 26 10:47:45.413: INFO: Updating stateful set ss2
May 26 10:47:45.440: INFO: Waiting for Pod e2e-tests-statefulset-vrdmn/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
May 26 10:47:56.400: INFO: Found 2 stateful pods, waiting for 3
May 26 10:48:06.406: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 26 10:48:06.406: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 26 10:48:06.406: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 26 10:48:06.431: INFO: Updating stateful set ss2
May 26 10:48:06.440: INFO: Waiting for Pod e2e-tests-statefulset-vrdmn/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 26 10:48:16.467: INFO: Updating stateful set ss2
May 26 10:48:16.506: INFO: Waiting for StatefulSet e2e-tests-statefulset-vrdmn/ss2 to complete update
May 26 10:48:16.506: INFO: Waiting for Pod e2e-tests-statefulset-vrdmn/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 26 10:48:26.514: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vrdmn
May 26 10:48:26.517: INFO: Scaling statefulset ss2 to 0
May 26 10:48:46.538: INFO: Waiting for statefulset status.replicas updated to 0
May 26 10:48:46.541: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 10:48:46.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vrdmn" for this suite.
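The canary and phased rolling updates above are driven by the StatefulSet `updateStrategy.rollingUpdate.partition` field: pods with an ordinal greater than or equal to the partition get the new template, lower ordinals keep the old one. A sketch of how `ss2` could be configured for the canary step — only the names `ss2` and `test`, the replica count, and the nginx images appear in the log; labels and the rest of the spec are assumptions:

```yaml
# Sketch of a partitioned StatefulSet matching the behavior in the log.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test           # the log shows "Creating service test"
  replicas: 3
  selector:
    matchLabels:
      app: ss2                # hypothetical label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2            # canary: only ordinal >= 2 (ss2-2) gets the new template
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
```

Setting the partition above the replica count applies the update to no pods ("Not applying an update when the partition is greater than the number of replicas"); lowering it stepwise 2 → 1 → 0 produces the phased rolling update the log walks through.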
May 26 10:48:52.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 10:48:52.671: INFO: namespace: e2e-tests-statefulset-vrdmn, resource: bindings, ignored listing per whitelist
May 26 10:48:52.728: INFO: namespace e2e-tests-statefulset-vrdmn deletion completed in 6.096752208s

• [SLOW TEST:100.784 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 10:48:52.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
May 26 10:48:52.877: INFO: Waiting up to 5m0s for pod "var-expansion-7d640d16-9f3e-11ea-b1d1-0242ac110018" in namespace "e2e-tests-var-expansion-fsdfr" to be "success or failure"
May 26 10:48:52.932: INFO: Pod "var-expansion-7d640d16-9f3e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 54.985698ms
May 26 10:48:54.936: INFO: Pod "var-expansion-7d640d16-9f3e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058780264s
May 26 10:48:56.969: INFO: Pod "var-expansion-7d640d16-9f3e-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09137193s
STEP: Saw pod success
May 26 10:48:56.969: INFO: Pod "var-expansion-7d640d16-9f3e-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 10:48:56.972: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-7d640d16-9f3e-11ea-b1d1-0242ac110018 container dapi-container:
STEP: delete the pod
May 26 10:48:56.996: INFO: Waiting for pod var-expansion-7d640d16-9f3e-11ea-b1d1-0242ac110018 to disappear
May 26 10:48:57.006: INFO: Pod var-expansion-7d640d16-9f3e-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 10:48:57.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-fsdfr" for this suite.
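The pod spec under test is not printed in the log. The mechanism being exercised is the kubelet's `$(VAR)` substitution in a container's `command`/`args`, which could look roughly like the sketch below — everything except the container name `dapi-container` (taken from the log) is an assumption:

```yaml
# Hypothetical pod demonstrating $(VAR) expansion in container args;
# the kubelet substitutes $(MESSAGE) from the env list before starting the container.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container      # container name taken from the log
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"] # $(MESSAGE) is replaced with the env value below
    env:
    - name: MESSAGE
      value: "test-value"
```

The test then treats the pod like a job: it waits for phase `Succeeded` ("success or failure") and checks the container log for the expanded value.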
May 26 10:49:03.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 10:49:03.086: INFO: namespace: e2e-tests-var-expansion-fsdfr, resource: bindings, ignored listing per whitelist
May 26 10:49:03.095: INFO: namespace e2e-tests-var-expansion-fsdfr deletion completed in 6.085818122s

• [SLOW TEST:10.366 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 10:49:03.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 10:49:03.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-4p4km" for this suite.
May 26 10:49:09.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 10:49:09.408: INFO: namespace: e2e-tests-kubelet-test-4p4km, resource: bindings, ignored listing per whitelist
May 26 10:49:09.420: INFO: namespace e2e-tests-kubelet-test-4p4km deletion completed in 6.076845428s

• [SLOW TEST:6.324 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 10:49:09.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 26 10:49:11.129: INFO: Waiting up to 5m0s for pod "downwardapi-volume-883e5840-9f3e-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-zvtsx" to be "success or failure"
May 26 10:49:11.162: INFO: Pod "downwardapi-volume-883e5840-9f3e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.32365ms
May 26 10:49:13.165: INFO: Pod "downwardapi-volume-883e5840-9f3e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036212527s
May 26 10:49:15.442: INFO: Pod "downwardapi-volume-883e5840-9f3e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312654868s
May 26 10:49:17.446: INFO: Pod "downwardapi-volume-883e5840-9f3e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.316675287s
May 26 10:49:19.448: INFO: Pod "downwardapi-volume-883e5840-9f3e-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.319030635s
STEP: Saw pod success
May 26 10:49:19.448: INFO: Pod "downwardapi-volume-883e5840-9f3e-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 10:49:19.450: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-883e5840-9f3e-11ea-b1d1-0242ac110018 container client-container:
STEP: delete the pod
May 26 10:49:19.521: INFO: Waiting for pod downwardapi-volume-883e5840-9f3e-11ea-b1d1-0242ac110018 to disappear
May 26 10:49:19.532: INFO: Pod downwardapi-volume-883e5840-9f3e-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 10:49:19.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zvtsx" for this suite.
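"Should set mode on item file" refers to the per-item `mode` field of a projected downward API volume, which controls the permission bits of the generated file. A sketch of such a pod — the container name `client-container` comes from the log; the paths, field, and mode value are assumptions:

```yaml
# Hypothetical pod showing a projected downward API volume item with an
# explicit file mode; the test asserts the mode on the resulting file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container    # container name taken from the log
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400        # per-item mode; overrides the volume default
```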
May 26 10:49:25.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 10:49:25.687: INFO: namespace: e2e-tests-projected-zvtsx, resource: bindings, ignored listing per whitelist May 26 10:49:25.702: INFO: namespace e2e-tests-projected-zvtsx deletion completed in 6.166890429s • [SLOW TEST:16.282 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 10:49:25.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 26 10:49:44.361: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 10:50:26.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-cf8k4" for this suite. May 26 10:50:42.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 10:50:42.375: INFO: namespace: e2e-tests-namespaces-cf8k4, resource: bindings, ignored listing per whitelist May 26 10:50:42.418: INFO: namespace e2e-tests-namespaces-cf8k4 deletion completed in 16.100077266s STEP: Destroying namespace "e2e-tests-nsdeletetest-h4nj8" for this suite. May 26 10:50:42.419: INFO: Namespace e2e-tests-nsdeletetest-h4nj8 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-rtl8t" for this suite. May 26 10:50:54.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 10:50:54.854: INFO: namespace: e2e-tests-nsdeletetest-rtl8t, resource: bindings, ignored listing per whitelist May 26 10:50:54.855: INFO: namespace e2e-tests-nsdeletetest-rtl8t deletion completed in 12.436163638s • [SLOW TEST:89.153 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 10:50:54.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod May 26 10:54:08.032: INFO: Unexpected error occurred: the server was unable to return a response in the time allotted, but may still be processing the request (get pods test-host-network-pod) [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 STEP: Collecting events from namespace "e2e-tests-e2e-kubelet-etc-hosts-wpjml". STEP: Found 12 events. 
May 26 10:54:11.650: INFO: At 2020-05-26 10:50:55 +0000 UTC - event for test-pod: {default-scheduler } Scheduled: Successfully assigned e2e-tests-e2e-kubelet-etc-hosts-wpjml/test-pod to hunter-worker2 May 26 10:54:11.650: INFO: At 2020-05-26 10:50:59 +0000 UTC - event for test-pod: {kubelet hunter-worker2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec:1.1" already present on machine May 26 10:54:11.650: INFO: At 2020-05-26 10:52:34 +0000 UTC - event for test-pod: {kubelet hunter-worker2} Created: Created container May 26 10:54:11.650: INFO: At 2020-05-26 10:52:35 +0000 UTC - event for test-pod: {kubelet hunter-worker2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec:1.1" already present on machine May 26 10:54:11.650: INFO: At 2020-05-26 10:52:35 +0000 UTC - event for test-pod: {kubelet hunter-worker2} Started: Started container May 26 10:54:11.650: INFO: At 2020-05-26 10:52:40 +0000 UTC - event for test-pod: {kubelet hunter-worker2} Created: Created container May 26 10:54:11.650: INFO: At 2020-05-26 10:52:40 +0000 UTC - event for test-pod: {kubelet hunter-worker2} Started: Started container May 26 10:54:11.650: INFO: At 2020-05-26 10:52:40 +0000 UTC - event for test-pod: {kubelet hunter-worker2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec:1.1" already present on machine May 26 10:54:11.650: INFO: At 2020-05-26 10:52:50 +0000 UTC - event for test-pod: {kubelet hunter-worker2} Created: Created container May 26 10:54:11.650: INFO: At 2020-05-26 10:52:51 +0000 UTC - event for test-pod: {kubelet hunter-worker2} Started: Started container May 26 10:54:11.650: INFO: At 2020-05-26 10:52:54 +0000 UTC - event for test-host-network-pod: {default-scheduler } Scheduled: Successfully assigned e2e-tests-e2e-kubelet-etc-hosts-wpjml/test-host-network-pod to hunter-worker2 May 26 10:54:11.650: INFO: At 2020-05-26 10:52:55 +0000 UTC - event for test-host-network-pod: {kubelet hunter-worker2} Pulled: Container image 
"gcr.io/kubernetes-e2e-test-images/netexec:1.1" already present on machine May 26 10:54:37.193: INFO: unable to fetch node list: rpc error: code = Internal desc = transport: received the unexpected content-type "text/plain; charset=utf-8" May 26 10:54:37.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-wpjml" for this suite. May 26 10:58:59.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 10:58:59.549: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-wpjml, resource: bindings, ignored listing per whitelist May 26 10:59:00.382: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-wpjml deletion completed in 3m37.434545134s • Failure [485.526 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Expected error: <*errors.StatusError | 0xc00166d3b0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "the server was unable to return a response in the time allotted, but may still be processing the request (get pods test-host-network-pod)", Reason: "Timeout", Details: { Name: "test-host-network-pod", Group: "", Kind: "pods", UID: "", Causes: [ { Type: "UnexpectedServerResponse", Message: "{\"metadata\":{},\"status\":\"Failure\",\"message\":\"Timeout: request did not complete within 1m0s\",\"reason\":\"Timeout\",\"details\":{},\"code\":504}", Field: "", }, ], RetryAfterSeconds: 0, }, Code: 504, }, } the server was unable to return a response in the time allotted, but may still be processing the request (get pods test-host-network-pod) not to have occurred 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 10:59:00.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 26 10:59:00.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vk5t2' May 26 10:59:11.188: INFO: stderr: "" May 26 10:59:11.188: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 26 10:59:46.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vk5t2 -o json' May 26 10:59:46.321: INFO: stderr: "" May 26 10:59:46.321: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n 
\"metadata\": {\n \"creationTimestamp\": \"2020-05-26T10:59:11Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-vk5t2\",\n \"resourceVersion\": \"12604100\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-vk5t2/pods/e2e-test-nginx-pod\",\n \"uid\": \"ededae60-9f3f-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-b65h4\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-b65h4\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-b65h4\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-26T10:59:11Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-26T10:59:41Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n 
\"lastTransitionTime\": \"2020-05-26T10:59:41Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-26T10:59:11Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d0a61d206139df735f3f54173c5b3596d3964b6f2c6b254fc070104ad2260ec8\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-26T10:59:40Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.57\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-26T10:59:11Z\"\n }\n}\n"
STEP: replace the image in the pod
May 26 10:59:46.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-vk5t2'
May 26 10:59:46.610: INFO: stderr: ""
May 26 10:59:46.610: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
May 26 10:59:46.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vk5t2'
May 26 11:02:32.184: INFO: stderr: ""
May 26 11:02:32.184: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:02:32.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vk5t2" for this suite.
May 26 11:02:42.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:02:42.378: INFO: namespace: e2e-tests-kubectl-vk5t2, resource: bindings, ignored listing per whitelist
May 26 11:02:42.399: INFO: namespace e2e-tests-kubectl-vk5t2 deletion completed in 10.154334862s

• [SLOW TEST:222.017 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:02:42.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 26 11:02:44.455: INFO: Waiting up to 5m0s for pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-4wsq8" to be "success or failure"
May 26 11:02:47.040: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.584212688s
May 26 11:02:49.238: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.782435727s
May 26 11:02:54.961: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.506180274s
May 26 11:02:56.964: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.508836771s
May 26 11:02:58.968: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.512365111s
May 26 11:03:00.971: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.516038551s
May 26 11:03:03.142: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.687099053s
May 26 11:03:08.123: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.667642087s
May 26 11:03:16.640: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.184720994s
May 26 11:03:20.406: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 35.950915543s
May 26 11:03:22.436: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 37.981032117s
May 26 11:03:24.439: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 39.983817305s
May 26 11:03:27.083: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 42.627737624s
May 26 11:03:29.087: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 44.631482878s
May 26 11:03:31.091: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 46.635409234s
May 26 11:03:34.000: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 49.545030811s
May 26 11:03:37.384: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 52.92872484s
May 26 11:03:39.387: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 54.931730986s
May 26 11:03:42.607: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 58.152025639s
May 26 11:03:45.329: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.873854366s
May 26 11:03:47.332: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.876808137s
May 26 11:03:49.335: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.879621494s
May 26 11:03:51.339: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m6.883231179s
May 26 11:03:53.341: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m8.886033622s
May 26 11:03:55.750: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m11.294661371s
May 26 11:03:58.166: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m13.710302476s
May 26 11:04:00.239: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m15.783203557s
May 26 11:04:02.509: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m18.053686424s
May 26 11:04:04.700: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m20.244790939s
May 26 11:04:06.849: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m22.394142603s
May 26 11:04:08.868: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m24.412334878s
STEP: Saw pod success
May 26 11:04:08.868: INFO: Pod "pod-6cee5437-9f40-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 11:04:08.870: INFO: Trying to get logs from node hunter-worker2 pod pod-6cee5437-9f40-11ea-b1d1-0242ac110018 container test-container:
STEP: delete the pod
May 26 11:04:11.219: INFO: Waiting for pod pod-6cee5437-9f40-11ea-b1d1-0242ac110018 to disappear
May 26 11:04:11.766: INFO: Pod pod-6cee5437-9f40-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:04:11.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4wsq8" for this suite.
May 26 11:09:06.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:09:06.590: INFO: namespace: e2e-tests-emptydir-4wsq8, resource: bindings, ignored listing per whitelist
May 26 11:09:06.625: INFO: namespace e2e-tests-emptydir-4wsq8 deletion completed in 4m54.854847175s

• [SLOW TEST:384.227 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:09:06.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 26 11:09:06.861: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:06.864: INFO: Number of nodes with available pods: 0
May 26 11:09:06.864: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:07.950: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:07.952: INFO: Number of nodes with available pods: 0
May 26 11:09:07.952: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:08.903: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:08.905: INFO: Number of nodes with available pods: 0
May 26 11:09:08.905: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:09.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:09.910: INFO: Number of nodes with available pods: 0
May 26 11:09:09.910: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:10.866: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:10.869: INFO: Number of nodes with available pods: 0
May 26 11:09:10.869: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:11.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:11.868: INFO: Number of nodes with available pods: 0
May 26 11:09:11.868: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:12.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:12.869: INFO: Number of nodes with available pods: 0
May 26 11:09:12.869: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:13.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:13.869: INFO: Number of nodes with available pods: 0
May 26 11:09:13.869: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:14.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:14.870: INFO: Number of nodes with available pods: 0
May 26 11:09:14.870: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:15.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:15.871: INFO: Number of nodes with available pods: 0
May 26 11:09:15.871: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:19.580: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:20.839: INFO: Number of nodes with available pods: 0
May 26 11:09:20.839: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:20.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:20.870: INFO: Number of nodes with available pods: 0
May 26 11:09:20.870: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:21.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:21.870: INFO: Number of nodes with available pods: 0
May 26 11:09:21.870: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:22.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:22.869: INFO: Number of nodes with available pods: 0
May 26 11:09:22.869: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:23.884: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:24.005: INFO: Number of nodes with available pods: 0
May 26 11:09:24.005: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:25.656: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:25.658: INFO: Number of nodes with available pods: 0
May 26 11:09:25.658: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:25.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:25.870: INFO: Number of nodes with available pods: 0
May 26 11:09:25.870: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:26.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:26.869: INFO: Number of nodes with available pods: 0
May 26 11:09:26.869: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:28.264: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:28.315: INFO: Number of nodes with available pods: 0
May 26 11:09:28.315: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:30.647: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:30.650: INFO: Number of nodes with available pods: 0
May 26 11:09:30.650: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:32.638: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:32.641: INFO: Number of nodes with available pods: 0
May 26 11:09:32.641: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:33.065: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:33.111: INFO: Number of nodes with available pods: 0
May 26 11:09:33.111: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:33.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:33.870: INFO: Number of nodes with available pods: 0
May 26 11:09:33.870: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:36.740: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:38.329: INFO: Number of nodes with available pods: 0
May 26 11:09:38.329: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:42.318: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:44.482: INFO: Number of nodes with available pods: 0
May 26 11:09:44.482: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:44.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:44.870: INFO: Number of nodes with available pods: 0
May 26 11:09:44.870: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:45.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:45.868: INFO: Number of nodes with available pods: 0
May 26 11:09:45.869: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:48.791: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:48.993: INFO: Number of nodes with available pods: 0
May 26 11:09:48.993: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:49.869: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:49.872: INFO: Number of nodes with available pods: 0
May 26 11:09:49.872: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:51.053: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:52.612: INFO: Number of nodes with available pods: 0
May 26 11:09:52.612: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:54.593: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:54.597: INFO: Number of nodes with available pods: 0
May 26 11:09:54.597: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:55.722: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:55.724: INFO: Number of nodes with available pods: 0
May 26 11:09:55.724: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:56.377: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:56.380: INFO: Number of nodes with available pods: 0
May 26 11:09:56.380: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:56.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:56.870: INFO: Number of nodes with available pods: 0
May 26 11:09:56.870: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:09:58.127: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:09:58.843: INFO: Number of nodes with available pods: 0
May 26 11:09:58.843: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:00.668: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:00.696: INFO: Number of nodes with available pods: 0
May 26 11:10:00.696: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:01.274: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:01.276: INFO: Number of nodes with available pods: 0
May 26 11:10:01.276: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:01.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:01.870: INFO: Number of nodes with available pods: 0
May 26 11:10:01.870: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:04.885: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:05.126: INFO: Number of nodes with available pods: 0
May 26 11:10:05.126: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:05.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:05.870: INFO: Number of nodes with available pods: 0
May 26 11:10:05.870: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:06.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:06.872: INFO: Number of nodes with available pods: 0
May 26 11:10:06.872: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:08.174: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:08.178: INFO: Number of nodes with available pods: 0
May 26 11:10:08.178: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:09.095: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:09.097: INFO: Number of nodes with available pods: 0
May 26 11:10:09.097: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:09.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:09.871: INFO: Number of nodes with available pods: 0
May 26 11:10:09.871: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:11.818: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:11.820: INFO: Number of nodes with available pods: 0
May 26 11:10:11.820: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:12.049: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:12.103: INFO: Number of nodes with available pods: 0
May 26 11:10:12.103: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:14.434: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:14.816: INFO: Number of nodes with available pods: 0
May 26 11:10:14.816: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:15.020: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:15.298: INFO: Number of nodes with available pods: 0
May 26 11:10:15.298: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:15.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:15.870: INFO: Number of nodes with available pods: 0
May 26 11:10:15.870: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:17.352: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:19.800: INFO: Number of nodes with available pods: 0
May 26 11:10:19.800: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:19.976: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:19.978: INFO: Number of nodes with available pods: 0
May 26 11:10:19.978: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:20.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:20.869: INFO: Number of nodes with available pods: 0
May 26 11:10:20.869: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:22.012: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:23.065: INFO: Number of nodes with available pods: 0
May 26 11:10:23.065: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:24.205: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:24.515: INFO: Number of nodes with available pods: 0
May 26 11:10:24.515: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:26.124: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:27.096: INFO: Number of nodes with available pods: 0
May 26 11:10:27.096: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:27.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:27.870: INFO: Number of nodes with available pods: 0
May 26 11:10:27.870: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:29.006: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:29.010: INFO: Number of nodes with available pods: 0
May 26 11:10:29.010: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:31.419: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:31.422: INFO: Number of nodes with available pods: 0
May 26 11:10:31.422: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:48.973: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:48.979: INFO: Number of nodes with available pods: 0
May 26 11:10:48.979: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:50.893: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:51.245: INFO: Number of nodes with available pods: 0
May 26 11:10:51.245: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:53.189: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:53.191: INFO: Number of nodes with available pods: 0
May 26 11:10:53.191: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:53.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:53.872: INFO: Number of nodes with available pods: 0
May 26 11:10:53.872: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:54.869: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:54.871: INFO: Number of nodes with available pods: 0
May 26 11:10:54.871: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:10:56.335: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:10:56.337: INFO: Number of nodes with available pods: 0
May 26 11:10:56.337: INFO: Node hunter-worker is running more than one daemon pod
May
26 11:10:57.496: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:10:57.528: INFO: Number of nodes with available pods: 0 May 26 11:10:57.528: INFO: Node hunter-worker is running more than one daemon pod May 26 11:10:59.309: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:10:59.533: INFO: Number of nodes with available pods: 0 May 26 11:10:59.533: INFO: Node hunter-worker is running more than one daemon pod May 26 11:10:59.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:10:59.870: INFO: Number of nodes with available pods: 0 May 26 11:10:59.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:00.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:00.871: INFO: Number of nodes with available pods: 0 May 26 11:11:00.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:02.306: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:03.738: INFO: Number of nodes with available pods: 0 May 26 11:11:03.738: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:03.947: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:03.949: INFO: Number of nodes with available pods: 0 May 26 11:11:03.949: INFO: 
Node hunter-worker is running more than one daemon pod May 26 11:11:04.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:04.871: INFO: Number of nodes with available pods: 0 May 26 11:11:04.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:06.426: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:10.082: INFO: Number of nodes with available pods: 0 May 26 11:11:10.082: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:10.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:10.871: INFO: Number of nodes with available pods: 0 May 26 11:11:10.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:11.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:11.870: INFO: Number of nodes with available pods: 0 May 26 11:11:11.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:22.277: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:26.749: INFO: Number of nodes with available pods: 0 May 26 11:11:26.749: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:27.522: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:27.525: INFO: Number 
of nodes with available pods: 0 May 26 11:11:27.525: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:27.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:27.869: INFO: Number of nodes with available pods: 0 May 26 11:11:27.869: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:30.774: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:32.000: INFO: Number of nodes with available pods: 0 May 26 11:11:32.000: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:32.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:32.871: INFO: Number of nodes with available pods: 0 May 26 11:11:32.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:33.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:33.870: INFO: Number of nodes with available pods: 0 May 26 11:11:33.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:35.176: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:35.178: INFO: Number of nodes with available pods: 0 May 26 11:11:35.178: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:35.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], 
skip checking this node May 26 11:11:35.870: INFO: Number of nodes with available pods: 0 May 26 11:11:35.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:36.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:36.870: INFO: Number of nodes with available pods: 0 May 26 11:11:36.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:37.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:37.870: INFO: Number of nodes with available pods: 0 May 26 11:11:37.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:38.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:38.870: INFO: Number of nodes with available pods: 0 May 26 11:11:38.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:39.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:39.871: INFO: Number of nodes with available pods: 0 May 26 11:11:39.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:43.153: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:44.369: INFO: Number of nodes with available pods: 0 May 26 11:11:44.369: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:44.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:44.870: INFO: Number of nodes with available pods: 0 May 26 11:11:44.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:45.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:45.871: INFO: Number of nodes with available pods: 0 May 26 11:11:45.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:46.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:46.871: INFO: Number of nodes with available pods: 0 May 26 11:11:46.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:47.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:47.869: INFO: Number of nodes with available pods: 0 May 26 11:11:47.869: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:48.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:48.869: INFO: Number of nodes with available pods: 0 May 26 11:11:48.869: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:49.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:49.870: INFO: Number of nodes with available pods: 0 May 26 11:11:49.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:51.690: INFO: DaemonSet 
pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:51.692: INFO: Number of nodes with available pods: 0 May 26 11:11:51.692: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:51.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:51.870: INFO: Number of nodes with available pods: 0 May 26 11:11:51.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:52.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:52.871: INFO: Number of nodes with available pods: 0 May 26 11:11:52.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:55.545: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:56.714: INFO: Number of nodes with available pods: 0 May 26 11:11:56.714: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:56.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:56.871: INFO: Number of nodes with available pods: 0 May 26 11:11:56.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:11:57.918: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:57.921: INFO: Number of nodes with available pods: 0 May 26 11:11:57.921: INFO: Node hunter-worker is running 
more than one daemon pod May 26 11:11:58.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:11:58.871: INFO: Number of nodes with available pods: 0 May 26 11:11:58.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:10.763: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:15.008: INFO: Number of nodes with available pods: 0 May 26 11:12:15.008: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:20.098: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:22.960: INFO: Number of nodes with available pods: 0 May 26 11:12:22.960: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:27.058: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:28.033: INFO: Number of nodes with available pods: 0 May 26 11:12:28.033: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:28.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:28.870: INFO: Number of nodes with available pods: 0 May 26 11:12:28.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:35.983: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:39.296: INFO: Number of nodes with available pods: 
0 May 26 11:12:39.296: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:39.924: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:39.927: INFO: Number of nodes with available pods: 0 May 26 11:12:39.927: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:40.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:40.872: INFO: Number of nodes with available pods: 0 May 26 11:12:40.872: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:41.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:41.871: INFO: Number of nodes with available pods: 0 May 26 11:12:41.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:42.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:42.872: INFO: Number of nodes with available pods: 0 May 26 11:12:42.872: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:43.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:43.870: INFO: Number of nodes with available pods: 0 May 26 11:12:43.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:44.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 
26 11:12:44.869: INFO: Number of nodes with available pods: 0 May 26 11:12:44.869: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:48.451: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:48.454: INFO: Number of nodes with available pods: 0 May 26 11:12:48.454: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:48.875: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:48.877: INFO: Number of nodes with available pods: 0 May 26 11:12:48.877: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:49.871: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:49.873: INFO: Number of nodes with available pods: 0 May 26 11:12:49.873: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:52.498: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:53.182: INFO: Number of nodes with available pods: 0 May 26 11:12:53.182: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:53.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:53.871: INFO: Number of nodes with available pods: 0 May 26 11:12:53.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:12:54.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:12:54.870: INFO: Number of nodes with available pods: 0 May 26 11:12:54.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:08.704: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:09.272: INFO: Number of nodes with available pods: 0 May 26 11:13:09.272: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:11.909: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:12.734: INFO: Number of nodes with available pods: 0 May 26 11:13:12.734: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:13.664: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:13.666: INFO: Number of nodes with available pods: 0 May 26 11:13:13.666: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:13.866: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:13.869: INFO: Number of nodes with available pods: 0 May 26 11:13:13.869: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:14.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:14.871: INFO: Number of nodes with available pods: 0 May 26 11:13:14.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:15.869: INFO: DaemonSet pods can't tolerate node 
hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:15.872: INFO: Number of nodes with available pods: 0 May 26 11:13:15.872: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:16.943: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:17.074: INFO: Number of nodes with available pods: 0 May 26 11:13:17.074: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:17.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:17.871: INFO: Number of nodes with available pods: 0 May 26 11:13:17.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:22.903: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:24.492: INFO: Number of nodes with available pods: 0 May 26 11:13:24.492: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:24.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:24.960: INFO: Number of nodes with available pods: 0 May 26 11:13:24.960: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:25.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:25.871: INFO: Number of nodes with available pods: 0 May 26 11:13:25.871: INFO: Node hunter-worker is running more than one daemon pod May 
26 11:13:26.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:26.869: INFO: Number of nodes with available pods: 0 May 26 11:13:26.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:28.081: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:28.084: INFO: Number of nodes with available pods: 0 May 26 11:13:28.084: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:29.350: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:29.353: INFO: Number of nodes with available pods: 0 May 26 11:13:29.353: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:32.770: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:33.954: INFO: Number of nodes with available pods: 0 May 26 11:13:33.954: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:35.638: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:35.641: INFO: Number of nodes with available pods: 0 May 26 11:13:35.641: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:36.421: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:36.423: INFO: Number of nodes with available pods: 0 May 26 11:13:36.423: INFO: 
Node hunter-worker is running more than one daemon pod May 26 11:13:37.458: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:37.460: INFO: Number of nodes with available pods: 0 May 26 11:13:37.460: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:37.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:37.870: INFO: Number of nodes with available pods: 0 May 26 11:13:37.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:42.141: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:42.144: INFO: Number of nodes with available pods: 0 May 26 11:13:42.144: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:42.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:42.872: INFO: Number of nodes with available pods: 0 May 26 11:13:42.872: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:46.034: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:49.039: INFO: Number of nodes with available pods: 0 May 26 11:13:49.039: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:53.219: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:54.606: INFO: Number 
of nodes with available pods: 0 May 26 11:13:54.606: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:54.868: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:54.871: INFO: Number of nodes with available pods: 0 May 26 11:13:54.871: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:57.691: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:57.693: INFO: Number of nodes with available pods: 0 May 26 11:13:57.693: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:57.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:13:57.870: INFO: Number of nodes with available pods: 0 May 26 11:13:57.870: INFO: Node hunter-worker is running more than one daemon pod May 26 11:13:59.248: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:14:00.045: INFO: Number of nodes with available pods: 0 May 26 11:14:00.045: INFO: Node hunter-worker is running more than one daemon pod May 26 11:14:00.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:14:00.869: INFO: Number of nodes with available pods: 0 May 26 11:14:00.869: INFO: Node hunter-worker is running more than one daemon pod May 26 11:14:01.909: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], 
skip checking this node
May 26 11:14:03.147: INFO: Number of nodes with available pods: 0
May 26 11:14:03.147: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:14:05.012: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:14:05.655: INFO: Number of nodes with available pods: 2
May 26 11:14:05.655: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 26 11:14:06.069: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:14:06.072: INFO: Number of nodes with available pods: 2
May 26 11:14:06.072: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
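The long stretch of near-identical records above is the e2e framework polling on an interval until every schedulable node reports exactly one available daemon pod, giving up only at a timeout. A minimal sketch of that wait-and-poll pattern in Python (the names `wait_until` and `check_daemon_pods` are illustrative, not taken from the e2e framework):

```python
import time

def wait_until(condition, timeout=300.0, interval=1.0,
               clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        sleep(interval)
    return False

# Simulate the e2e check: available daemon pods per node
# (tainted nodes like hunter-control-plane are simply not in the map).
status = {"hunter-worker": 0, "hunter-worker2": 0}

def check_daemon_pods():
    # A node counts only when it runs exactly one available daemon pod.
    return all(count == 1 for count in status.values())

status["hunter-worker"] = 1
status["hunter-worker2"] = 1
print(wait_until(check_daemon_pods, timeout=5.0, interval=0.1))  # True once all nodes are ready
```

Each failed iteration in the real test produces one "Number of nodes with available pods" record, which is why slow image pulls on this cluster generate minutes of repeated output before the final success at 11:14:05.655.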
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bvnr2, will wait for the garbage collector to delete the pods
May 26 11:19:06.140: INFO: Deleting DaemonSet.extensions daemon-set took: 4.843769ms
May 26 11:19:06.240: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.171911ms
May 26 11:20:00.243: INFO: Number of nodes with available pods: 0
May 26 11:20:00.243: INFO: Number of running nodes: 0, number of available pods: 0
May 26 11:20:00.246: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bvnr2/daemonsets","resourceVersion":"12605467"},"items":null}
May 26 11:20:00.248: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bvnr2/pods","resourceVersion":"12605467"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-daemonsets-bvnr2".
STEP: Found 16 events.
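The `daemonset:` and `pods:` dumps above are raw Kubernetes List payloads. Note that an empty collection is serialized as `"items":null` rather than `"items":[]`, which JSON decoders map to `None`; any code counting leftover objects has to tolerate that. A small sketch (the sample payload is copied verbatim from the log; `remaining_items` is an illustrative helper, not framework code):

```python
import json

# DaemonSetList payload as dumped by the test after garbage collection.
daemonset_dump = ('{"kind":"DaemonSetList","apiVersion":"apps/v1",'
                  '"metadata":{"selfLink":"/apis/apps/v1/namespaces/'
                  'e2e-tests-daemonsets-bvnr2/daemonsets",'
                  '"resourceVersion":"12605467"},"items":null}')

def remaining_items(dump: str) -> int:
    """Count objects in a Kubernetes List payload, treating a null
    `items` field (how an empty list is serialized) as zero."""
    obj = json.loads(dump)
    return len(obj.get("items") or [])

print(remaining_items(daemonset_dump))  # 0: the garbage collector removed everything
```

The `or []` guard is the whole point: `len(None)` would raise a `TypeError` on exactly the empty-list case the teardown is verifying.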
May 26 11:20:00.256: INFO: At 2020-05-26 11:09:06 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-mn7dr
May 26 11:20:00.256: INFO: At 2020-05-26 11:09:06 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-9stsf
May 26 11:20:00.256: INFO: At 2020-05-26 11:09:06 +0000 UTC - event for daemon-set-9stsf: {default-scheduler } Scheduled: Successfully assigned e2e-tests-daemonsets-bvnr2/daemon-set-9stsf to hunter-worker
May 26 11:20:00.256: INFO: At 2020-05-26 11:09:06 +0000 UTC - event for daemon-set-mn7dr: {default-scheduler } Scheduled: Successfully assigned e2e-tests-daemonsets-bvnr2/daemon-set-mn7dr to hunter-worker2
May 26 11:20:00.256: INFO: At 2020-05-26 11:09:58 +0000 UTC - event for daemon-set-9stsf: {kubelet hunter-worker} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
May 26 11:20:00.256: INFO: At 2020-05-26 11:10:02 +0000 UTC - event for daemon-set-mn7dr: {kubelet hunter-worker2} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
May 26 11:20:00.256: INFO: At 2020-05-26 11:11:58 +0000 UTC - event for daemon-set-9stsf: {kubelet hunter-worker} Failed: Error: context deadline exceeded
May 26 11:20:00.256: INFO: At 2020-05-26 11:11:59 +0000 UTC - event for daemon-set-9stsf: {kubelet hunter-worker} Failed: Error: failed to reserve container name "app_daemon-set-9stsf_e2e-tests-daemonsets-bvnr2_50fc8098-9f41-11ea-99e8-0242ac110002_0": name "app_daemon-set-9stsf_e2e-tests-daemonsets-bvnr2_50fc8098-9f41-11ea-99e8-0242ac110002_0" is reserved for "06218bc73ae7e824adfdcc37879f13f412343b95d7aa01716697458e14a28627"
May 26 11:20:00.256: INFO: At 2020-05-26 11:12:02 +0000 UTC - event for daemon-set-mn7dr: {kubelet hunter-worker2} Failed: Error: context deadline exceeded
May 26 11:20:00.256: INFO: At 2020-05-26 11:12:03 +0000 UTC - event for daemon-set-mn7dr: {kubelet hunter-worker2} Failed: Error: failed to reserve container name "app_daemon-set-mn7dr_e2e-tests-daemonsets-bvnr2_50fcf30c-9f41-11ea-99e8-0242ac110002_0": name "app_daemon-set-mn7dr_e2e-tests-daemonsets-bvnr2_50fcf30c-9f41-11ea-99e8-0242ac110002_0" is reserved for "1f07f1e5aeba50e98f3593df5c12a53659a048bbb7341c6803fd8ff55dc85e2c"
May 26 11:20:00.256: INFO: At 2020-05-26 11:13:57 +0000 UTC - event for daemon-set-9stsf: {kubelet hunter-worker} Created: Created container
May 26 11:20:00.256: INFO: At 2020-05-26 11:13:57 +0000 UTC - event for daemon-set-mn7dr: {kubelet hunter-worker2} Created: Created container
May 26 11:20:00.256: INFO: At 2020-05-26 11:14:01 +0000 UTC - event for daemon-set-9stsf: {kubelet hunter-worker} Started: Started container
May 26 11:20:00.256: INFO: At 2020-05-26 11:14:01 +0000 UTC - event for daemon-set-mn7dr: {kubelet hunter-worker2} Started: Started container
May 26 11:20:00.256: INFO: At 2020-05-26 11:19:21 +0000 UTC - event for daemon-set-9stsf: {kubelet hunter-worker} Killing: Killing container with id containerd://app:Need to kill Pod
May 26 11:20:00.256: INFO: At 2020-05-26 11:19:21 +0000 UTC - event for daemon-set-mn7dr: {kubelet hunter-worker2} Killing: Killing container with id containerd://app:Need to kill Pod
May 26 11:20:00.262: INFO: POD NODE PHASE GRACE CONDITIONS
May 26 11:20:00.262: INFO: coredns-54ff9cd656-4h7lb hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:32 +0000 UTC }]
May 26 11:20:00.262: INFO: coredns-54ff9cd656-8vrkk hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:39 +0000 UTC
} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:32 +0000 UTC }] May 26 11:20:00.262: INFO: etcd-hunter-control-plane hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC }] May 26 11:20:00.262: INFO: kindnet-54h7m hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC }] May 26 11:20:00.262: INFO: kindnet-l2xm6 hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:08 +0000 UTC }] May 26 11:20:00.262: INFO: kindnet-mtqrs hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC }] May 26 11:20:00.262: INFO: kube-apiserver-hunter-control-plane hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:45 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC }] May 26 11:20:00.262: INFO: kube-controller-manager-hunter-control-plane hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 10:56:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 10:56:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC }] May 26 11:20:00.262: INFO: kube-proxy-mmppc hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:08 +0000 UTC }] May 26 11:20:00.263: INFO: kube-proxy-s52ll hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC }] May 26 11:20:00.263: INFO: kube-proxy-szbng hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC }] May 26 11:20:00.263: INFO: kube-scheduler-hunter-control-plane hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:15:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:15:52 +0000 UTC } {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC }] May 26 11:20:00.263: INFO: local-path-provisioner-77cfdd744c-q47vg hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:18:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:18:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:41 +0000 UTC }] May 26 11:20:00.263: INFO: May 26 11:20:00.266: INFO: Logging node info for node hunter-control-plane May 26 11:20:00.268: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-control-plane,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-control-plane,UID:faa448b1-66e9-11ea-99e8-0242ac110002,ResourceVersion:12605456,Generation:0,CreationTimestamp:2020-03-15 18:22:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-control-plane,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-26 11:19:49 +0000 UTC 2020-03-15 18:22:49 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-26 11:19:49 +0000 UTC 2020-03-15 18:22:49 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-26 11:19:49 +0000 UTC 2020-03-15 18:22:49 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-26 11:19:49 +0000 UTC 2020-03-15 18:23:41 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.2} {Hostname hunter-control-plane}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3c4716968dac483293a23c2100ad64a5,SystemUUID:683417f7-64ca-431d-b8ac-22e73b26255e,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.13.12,KubeProxyVersion:v1.13.12,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.2.24] 219889590} {[k8s.gcr.io/kube-apiserver:v1.13.12] 182535474} {[k8s.gcr.io/kube-controller-manager:v1.13.12] 147799876} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.13.12] 82073262} {[k8s.gcr.io/kube-scheduler:v1.13.12] 81117489} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.2.6] 40280546} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[k8s.gcr.io/pause:3.1] 746479}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 26 11:20:00.268: INFO: Logging kubelet events for node hunter-control-plane May 26 11:20:00.270: INFO: Logging pods the kubelet thinks is on node hunter-control-plane May 26 11:20:00.276: INFO: kube-proxy-mmppc started at 2020-03-15 18:23:08 +0000 UTC (0+1 container statuses 
recorded)
May 26 11:20:00.276: INFO: Container kube-proxy ready: true, restart count 0
May 26 11:20:00.276: INFO: kindnet-l2xm6 started at 2020-03-15 18:23:08 +0000 UTC (0+1 container statuses recorded)
May 26 11:20:00.276: INFO: Container kindnet-cni ready: true, restart count 0
May 26 11:20:00.276: INFO: local-path-provisioner-77cfdd744c-q47vg started at 2020-03-15 18:23:41 +0000 UTC (0+1 container statuses recorded)
May 26 11:20:00.276: INFO: Container local-path-provisioner ready: true, restart count 18
May 26 11:20:00.276: INFO: kube-apiserver-hunter-control-plane started at (0+0 container statuses recorded)
May 26 11:20:00.276: INFO: kube-controller-manager-hunter-control-plane started at (0+0 container statuses recorded)
May 26 11:20:00.276: INFO: kube-scheduler-hunter-control-plane started at (0+0 container statuses recorded)
May 26 11:20:00.276: INFO: etcd-hunter-control-plane started at (0+0 container statuses recorded)
W0526 11:20:00.279350 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 26 11:20:00.370: INFO: Latency metrics for node hunter-control-plane May 26 11:20:00.370: INFO: Logging node info for node hunter-worker May 26 11:20:00.373: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-worker,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-worker,UID:06f62848-66ea-11ea-99e8-0242ac110002,ResourceVersion:12605457,Generation:0,CreationTimestamp:2020-03-15 18:23:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-worker,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-26 11:19:50 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-26 11:19:50 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-26 11:19:50 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasSufficientPID kubelet 
has sufficient PID available} {Ready True 2020-05-26 11:19:50 +0000 UTC 2020-03-15 18:23:32 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.3} {Hostname hunter-worker}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1ba315df6f584c2d8a0cf4ead2df3551,SystemUUID:64c934e2-ea4e-48d7-92ee-50d04109360b,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.13.12,KubeProxyVersion:v1.13.12,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.2.24] 219889590} {[k8s.gcr.io/kube-apiserver:v1.13.12] 182535474} {[k8s.gcr.io/kube-controller-manager:v1.13.12] 147799876} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/kube-proxy:v1.13.12] 82073262} {[k8s.gcr.io/kube-scheduler:v1.13.12] 81117489} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[docker.io/library/nginx@sha256:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097 docker.io/library/nginx:latest] 51030575} {[docker.io/library/nginx@sha256:1de8dbae66ccb87c442ac9871987b729d7eee3b5341d9db50607feeeb650631e docker.io/library/nginx@sha256:f85c2305909e5881feec31efcf2a3449e5abd9b55a34522343c4b55ca2c947bb docker.io/library/nginx@sha256:86ae264c3f4acb99b2dee4d0098c40cb8c46dcf9e1148f05d3a51c4df6758c12 docker.io/library/nginx@sha256:404ed8de56dd47adadadf9e2641b1ba6ad5ce69abf251421f91d7601a2808ebe] 51030102} {[docker.io/library/nginx@sha256:d96d2b8f130247d1402389f80a6250382c0882e7fdd5484d2932e813e8b3742f 
docker.io/library/nginx@sha256:f1a695380f06cf363bf45fa85774cfcb5e60fe1557504715ff96a1933d6cbf51 docker.io/library/nginx@sha256:d81f010955749350ef31a119fb94b180fde8b2f157da351ff5667ae037968b28] 51030066} {[docker.io/library/nginx@sha256:282530fcb7cd19f3848c7b611043f82ae4be3781cb00105a1d593d7e6286b596 docker.io/library/nginx@sha256:e538de36780000ab3502edcdadd1e6990b981abc3f61f13584224b9e1674922c] 51022481} {[docker.io/library/nginx@sha256:2539d4344dd18e1df02be842ffc435f8e1f699cfc55516e2cf2cb16b7a9aea0b] 51021980} {[k8s.gcr.io/coredns:1.2.6] 40280546} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} 
{[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 1743226} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 26 11:20:00.373: INFO: Logging kubelet events for node hunter-worker May 26 11:20:00.376: INFO: Logging pods the kubelet thinks is on node hunter-worker May 26 
11:20:00.382: INFO: kube-proxy-szbng started at 2020-03-15 18:23:11 +0000 UTC (0+1 container statuses recorded) May 26 11:20:00.382: INFO: Container kube-proxy ready: true, restart count 0 May 26 11:20:00.382: INFO: kindnet-54h7m started at 2020-03-15 18:23:12 +0000 UTC (0+1 container statuses recorded) May 26 11:20:00.382: INFO: Container kindnet-cni ready: true, restart count 0 May 26 11:20:00.382: INFO: coredns-54ff9cd656-4h7lb started at 2020-03-15 18:23:32 +0000 UTC (0+1 container statuses recorded) May 26 11:20:00.382: INFO: Container coredns ready: true, restart count 0 W0526 11:20:00.384782 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 11:20:00.423: INFO: Latency metrics for node hunter-worker May 26 11:20:00.423: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:4m38.349886s} May 26 11:20:00.423: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.5 Latency:4m38.349886s} May 26 11:20:00.423: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:4m38.349886s} May 26 11:20:00.423: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m51.799305s} May 26 11:20:00.423: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m51.799305s} May 26 11:20:00.423: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m51.799305s} May 26 11:20:00.423: INFO: Logging node info for node hunter-worker2 May 26 11:20:00.425: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-worker2,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-worker2,UID:073ca987-66ea-11ea-99e8-0242ac110002,ResourceVersion:12605461,Generation:0,CreationTimestamp:2020-03-15 18:23:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: 
amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-worker2,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-26 11:19:53 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-26 11:19:53 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-26 11:19:53 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-26 11:19:53 +0000 UTC 2020-03-15 18:23:32 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.4} {Hostname hunter-worker2}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dde8970cf1ce42c0bbb19e593c484fda,SystemUUID:9c4b9179-843d-4e50-859c-2ca9335431a5,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 
19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.13.12,KubeProxyVersion:v1.13.12,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.2.24] 219889590} {[k8s.gcr.io/kube-apiserver:v1.13.12] 182535474} {[k8s.gcr.io/kube-controller-manager:v1.13.12] 147799876} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/kube-proxy:v1.13.12] 82073262} {[k8s.gcr.io/kube-scheduler:v1.13.12] 81117489} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[docker.io/library/nginx@sha256:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097 docker.io/library/nginx:latest] 51030575} {[docker.io/library/nginx@sha256:404ed8de56dd47adadadf9e2641b1ba6ad5ce69abf251421f91d7601a2808ebe docker.io/library/nginx@sha256:86ae264c3f4acb99b2dee4d0098c40cb8c46dcf9e1148f05d3a51c4df6758c12 docker.io/library/nginx@sha256:1de8dbae66ccb87c442ac9871987b729d7eee3b5341d9db50607feeeb650631e] 51030102} {[docker.io/library/nginx@sha256:d81f010955749350ef31a119fb94b180fde8b2f157da351ff5667ae037968b28 docker.io/library/nginx@sha256:d96d2b8f130247d1402389f80a6250382c0882e7fdd5484d2932e813e8b3742f] 51030066} {[docker.io/library/nginx@sha256:282530fcb7cd19f3848c7b611043f82ae4be3781cb00105a1d593d7e6286b596 docker.io/library/nginx@sha256:e538de36780000ab3502edcdadd1e6990b981abc3f61f13584224b9e1674922c] 51022481} {[docker.io/library/nginx@sha256:2539d4344dd18e1df02be842ffc435f8e1f699cfc55516e2cf2cb16b7a9aea0b] 51021980} {[k8s.gcr.io/coredns:1.2.6] 40280546} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} 
{[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} 
{[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 1743226} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 26 11:20:00.425: INFO: Logging kubelet events for node hunter-worker2 May 26 11:20:00.428: INFO: Logging pods the kubelet thinks is on node hunter-worker2 May 26 11:20:00.434: INFO: kindnet-mtqrs started at 2020-03-15 18:23:12 +0000 UTC (0+1 container statuses recorded) May 26 11:20:00.434: INFO: Container kindnet-cni ready: true, restart count 0 May 26 11:20:00.434: INFO: coredns-54ff9cd656-8vrkk started at 2020-03-15 18:23:32 +0000 UTC (0+1 container statuses recorded) May 26 11:20:00.434: INFO: Container coredns ready: true, restart count 0 May 26 11:20:00.434: INFO: kube-proxy-s52ll started at 2020-03-15 18:23:12 +0000 UTC (0+1 container statuses recorded) May 26 11:20:00.434: INFO: Container kube-proxy ready: true, restart count 0 W0526 11:20:00.436809 6 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 26 11:20:00.477: INFO: Latency metrics for node hunter-worker2
May 26 11:20:00.477: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:4m36.758437s}
May 26 11:20:00.477: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:4m36.758437s}
May 26 11:20:00.477: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.5 Latency:4m36.758437s}
May 26 11:20:00.477: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m56.053507s}
May 26 11:20:00.477: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m56.053507s}
May 26 11:20:00.477: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m56.053507s}
May 26 11:20:00.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bvnr2" for this suite.
May 26 11:20:17.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:20:17.689: INFO: namespace: e2e-tests-daemonsets-bvnr2, resource: bindings, ignored listing per whitelist
May 26 11:20:17.704: INFO: namespace e2e-tests-daemonsets-bvnr2 deletion completed in 17.224550913s

• Failure [671.079 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  error waiting for the failed daemon pod to be completely deleted
  Expected error:
      <*errors.errorString | 0xc0000d98a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:272
------------------------------
S
------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:20:17.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 26 11:20:17.822: INFO: Waiting up to 5m0s for pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-lxfjp" to be "success or failure" May 26 11:20:17.826: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321961ms May 26 11:20:19.855: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033806264s May 26 11:20:21.859: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037362588s May 26 11:20:24.737: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.915719687s May 26 11:20:27.954: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.132206605s May 26 11:20:29.972: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.150355702s May 26 11:20:33.515: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.693236538s May 26 11:20:36.003: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.180914661s May 26 11:20:38.006: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.18461261s May 26 11:20:40.852: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.029907316s May 26 11:20:42.855: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.033263614s May 26 11:20:45.136: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 27.314169106s May 26 11:20:47.139: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.317164796s May 26 11:20:49.142: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 31.320743941s May 26 11:20:51.149: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 33.327174466s May 26 11:20:53.637: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 35.815845512s May 26 11:20:55.679: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 37.857293997s May 26 11:20:57.750: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 39.927925969s May 26 11:20:59.871: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.049673334s May 26 11:21:01.875: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 44.053138525s May 26 11:21:04.002: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 46.180781373s May 26 11:21:06.350: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 48.528310452s May 26 11:21:08.931: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 51.109111405s May 26 11:21:10.934: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 53.112487669s May 26 11:21:12.936: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 55.114610696s May 26 11:21:14.969: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 57.147449189s May 26 11:21:16.989: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 59.167090536s May 26 11:21:18.992: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m1.17082323s May 26 11:21:21.525: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m3.703179405s May 26 11:21:23.528: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m5.706438239s May 26 11:21:25.532: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m7.710072282s STEP: Saw pod success May 26 11:21:25.532: INFO: Pod "downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 11:21:25.534: INFO: Trying to get logs from node hunter-worker pod downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018 container dapi-container: STEP: delete the pod May 26 11:21:26.150: INFO: Waiting for pod downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018 to disappear May 26 11:21:26.968: INFO: Pod downward-api-e0e7b3ef-9f42-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:21:26.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-lxfjp" for this suite. May 26 11:21:37.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:21:37.102: INFO: namespace: e2e-tests-downward-api-lxfjp, resource: bindings, ignored listing per whitelist May 26 11:21:37.143: INFO: namespace e2e-tests-downward-api-lxfjp deletion completed in 10.1724416s • [SLOW TEST:79.439 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:21:37.144: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-103be8c8-9f43-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume configMaps May 26 11:21:37.254: INFO: Waiting up to 5m0s for pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018" in namespace "e2e-tests-configmap-p44j4" to be "success or failure" May 26 11:21:37.263: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.437731ms May 26 11:21:39.267: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012841534s May 26 11:21:41.269: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015284583s May 26 11:21:43.272: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01792295s May 26 11:21:47.320: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.066208416s May 26 11:21:49.746: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.491835791s May 26 11:21:51.749: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.494913774s May 26 11:21:53.932: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.678281364s May 26 11:21:56.917: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.663016092s May 26 11:21:59.742: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.488568234s May 26 11:22:01.842: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 24.588479637s May 26 11:22:03.845: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.591478841s May 26 11:22:07.147: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.89305278s May 26 11:22:09.150: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 31.896175878s May 26 11:22:11.153: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 33.899520316s May 26 11:22:13.157: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 35.903071676s May 26 11:22:15.160: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 37.905813594s May 26 11:22:18.136: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 40.882469506s May 26 11:22:20.139: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 42.885119131s May 26 11:22:22.482: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 45.227793234s May 26 11:22:24.485: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 47.231592872s May 26 11:22:27.233: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. 
Elapsed: 49.979491769s May 26 11:22:29.331: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 52.077316143s May 26 11:22:31.807: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 54.553097506s STEP: Saw pod success May 26 11:22:31.807: INFO: Pod "pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 11:22:32.926: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018 container configmap-volume-test: STEP: delete the pod May 26 11:22:33.421: INFO: Waiting for pod pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018 to disappear May 26 11:22:33.974: INFO: Pod pod-configmaps-103cebfe-9f43-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:22:33.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-p44j4" for this suite. 
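The passing spec above consumes a ConfigMap through a volume *with mappings*, i.e. an `items` list that remaps a key to a chosen file path inside the mount. A minimal ConfigMap/Pod pair of that shape might look like the following sketch; the names, key, and mapped path are hypothetical (the real test generates them), and the `mounttest` flag is an assumption about that image's conventions:

```yaml
# Illustrative pair for "consumable from pods in volume with mappings".
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args: ["--file_content=/etc/configmap-volume/path/to/data-1"]  # assumed flag
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1        # the "mapping": key surfaces at this path
```

Without the `items` mapping, each key would simply appear as a file named after the key at the mount root.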
May 26 11:22:50.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:22:52.349: INFO: namespace: e2e-tests-configmap-p44j4, resource: bindings, ignored listing per whitelist May 26 11:22:52.378: INFO: namespace e2e-tests-configmap-p44j4 deletion completed in 15.938663106s • [SLOW TEST:75.234 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:22:52.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 26 11:26:28.986: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:26:29.241: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:26:31.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:26:32.462: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:26:33.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:26:34.536: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:26:35.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:26:35.244: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:26:37.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:26:40.001: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:26:41.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:26:44.809: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:26:45.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:26:46.106: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:26:47.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:26:47.277: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:26:49.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:26:50.247: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:26:51.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:26:51.391: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:26:53.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:26:56.852: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:26:57.241: INFO: Waiting for pod 
pod-with-prestop-http-hook to disappear May 26 11:26:59.249: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:01.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:02.463: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:03.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:03.344: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:05.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:07.208: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:07.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:07.547: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:09.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:11.421: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:13.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:16.042: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:17.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:17.288: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:19.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:19.271: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:21.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:21.930: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:23.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:23.325: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:25.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:25.517: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:27.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:27.386: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:29.241: INFO: Waiting for 
pod pod-with-prestop-http-hook to disappear May 26 11:27:29.245: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:31.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:31.244: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:33.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:33.443: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:35.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:35.279: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:37.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:37.248: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:39.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:39.245: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:41.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:41.245: INFO: Pod pod-with-prestop-http-hook still exists May 26 11:27:43.241: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 11:27:43.244: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:27:43.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9j55v" for this suite. 
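The lifecycle-hook spec above first starts a handler pod ("create the container to handle the HTTPGet hook request"), then creates `pod-with-prestop-http-hook` and deletes it, polling until the kubelet has run the preStop hook and the pod is gone. A pod with a preStop HTTP hook of roughly that shape might look like this sketch; the path, port, and image are illustrative assumptions, not copied from the test source:

```yaml
# Hypothetical shape of the pod with a preStop HTTP hook. The e2e test
# points the hook's host at the handler pod's IP, elided here.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1     # any long-running container works
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # assumed handler endpoint
          port: 8080                # assumed handler port
```

The long "still exists" poll loop in the log is expected: deletion waits for the hook request to complete (or time out) before the container is killed.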
May 26 11:28:23.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:28:23.344: INFO: namespace: e2e-tests-container-lifecycle-hook-9j55v, resource: bindings, ignored listing per whitelist May 26 11:28:23.345: INFO: namespace e2e-tests-container-lifecycle-hook-9j55v deletion completed in 40.093428007s • [SLOW TEST:330.966 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:28:23.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 26 11:28:23.415: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 11:28:23.424: INFO: Waiting for terminating namespaces to be deleted... 
May 26 11:28:23.470: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 26 11:28:23.475: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 26 11:28:23.475: INFO: Container kube-proxy ready: true, restart count 0 May 26 11:28:23.475: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 26 11:28:23.475: INFO: Container kindnet-cni ready: true, restart count 0 May 26 11:28:23.475: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 26 11:28:23.475: INFO: Container coredns ready: true, restart count 0 May 26 11:28:23.475: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 26 11:28:23.480: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 26 11:28:23.480: INFO: Container kindnet-cni ready: true, restart count 0 May 26 11:28:23.480: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 26 11:28:23.480: INFO: Container coredns ready: true, restart count 0 May 26 11:28:23.480: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 26 11:28:23.480: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16129081507b9235], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
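The scheduling spec above submits a pod whose `nodeSelector` matches no node and then asserts the `FailedScheduling` event seen in the log ("0/3 nodes are available: 3 node(s) didn't match node selector."). A pod reproducing that condition could be sketched as follows; the label key/value are illustrative, chosen only so that no node carries them:

```yaml
# Sketch of a pod with a nonempty nodeSelector that no node satisfies.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    nonexistent-label: "42"         # assumption: no node has this label
  containers:
  - name: restricted
    image: k8s.gcr.io/pause:3.1
```

Such a pod stays `Pending` indefinitely, emitting `FailedScheduling` events, until a matching label is added to some node or the selector is removed.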
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:28:24.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-hvmwb" for this suite. May 26 11:28:30.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:28:30.562: INFO: namespace: e2e-tests-sched-pred-hvmwb, resource: bindings, ignored listing per whitelist May 26 11:28:30.593: INFO: namespace e2e-tests-sched-pred-hvmwb deletion completed in 6.096001673s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.248 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:28:30.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:28:30.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-kbbkk" for this suite. May 26 11:28:54.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:28:54.798: INFO: namespace: e2e-tests-pods-kbbkk, resource: bindings, ignored listing per whitelist May 26 11:28:54.810: INFO: namespace e2e-tests-pods-kbbkk deletion completed in 24.090060775s • [SLOW TEST:24.217 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:28:54.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 26 11:30:21.664: INFO: Successfully updated pod "labelsupdate17c645e7-9f44-11ea-b1d1-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:30:23.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bdtv8" for this suite. May 26 11:30:45.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:30:45.760: INFO: namespace: e2e-tests-projected-bdtv8, resource: bindings, ignored listing per whitelist May 26 11:30:45.799: INFO: namespace e2e-tests-projected-bdtv8 deletion completed in 22.100461378s • [SLOW TEST:110.989 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:30:45.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in 
namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 26 11:30:46.000: INFO: Waiting up to 5m0s for pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-ldsf4" to be "success or failure" May 26 11:30:46.008: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.828973ms May 26 11:30:48.010: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010117797s May 26 11:30:50.013: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013097612s May 26 11:30:52.016: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015932351s May 26 11:30:54.019: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018835734s May 26 11:30:56.101: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.10091249s May 26 11:30:58.598: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.598157211s May 26 11:31:00.601: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.601035965s May 26 11:31:02.604: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.603736631s May 26 11:31:04.606: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 18.606127635s May 26 11:31:06.610: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.609542943s May 26 11:31:09.112: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 23.111633053s May 26 11:31:11.155: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 25.154396493s May 26 11:31:13.158: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.157959307s STEP: Saw pod success May 26 11:31:13.158: INFO: Pod "pod-5746f7c3-9f44-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 11:31:13.162: INFO: Trying to get logs from node hunter-worker2 pod pod-5746f7c3-9f44-11ea-b1d1-0242ac110018 container test-container: STEP: delete the pod May 26 11:31:13.934: INFO: Waiting for pod pod-5746f7c3-9f44-11ea-b1d1-0242ac110018 to disappear May 26 11:31:14.093: INFO: Pod pod-5746f7c3-9f44-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:31:14.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ldsf4" for this suite. 
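The emptyDir spec above, "(root,0644,tmpfs)", writes a file as root with mode 0644 into a tmpfs-backed emptyDir and verifies content, ownership, and permissions. A pod of roughly that shape might look like this sketch; the `mounttest` flags follow that image's apparent conventions and are assumptions, not copied from the test:

```yaml
# Illustrative pod for the (root,0644,tmpfs) emptyDir case.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-test           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args:                           # assumed flags
    - "--fs_type=/test-volume"
    - "--new_file_0644=/test-volume/test-file"
    - "--file_perm=/test-volume/test-file"
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # "tmpfs" in the spec name maps to medium: Memory
```

The "root" in the spec name refers to the writing user, and `medium: Memory` is what makes the emptyDir tmpfs-backed rather than node-disk-backed.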
May 26 11:31:22.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:31:22.302: INFO: namespace: e2e-tests-emptydir-ldsf4, resource: bindings, ignored listing per whitelist
May 26 11:31:22.312: INFO: namespace e2e-tests-emptydir-ldsf4 deletion completed in 8.216348107s
• [SLOW TEST:36.513 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:31:22.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-6d0f766f-9f44-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume configMaps
May 26 11:31:22.523: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-pt7l7" to be "success or failure"
May 26 11:31:22.555: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.502336ms
May 26 11:31:24.558: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035528838s
May 26 11:31:26.562: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039026903s
May 26 11:31:28.564: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041600417s
May 26 11:31:30.670: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147811502s
May 26 11:31:34.640: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.117550658s
May 26 11:31:36.643: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.120392167s
May 26 11:31:41.705: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.182871703s
May 26 11:31:44.323: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.800723105s
May 26 11:31:46.327: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.804199317s
May 26 11:31:48.330: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.807825529s
May 26 11:31:50.485: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 27.962502137s
May 26 11:31:52.552: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.029344719s
May 26 11:31:54.556: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.03317646s
May 26 11:31:59.731: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 37.208555587s
May 26 11:32:03.059: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 40.536573202s
May 26 11:32:05.063: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 42.540139249s
May 26 11:32:07.066: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 44.543195509s
May 26 11:32:09.069: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 46.546676623s
May 26 11:32:12.182: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 49.65958284s
May 26 11:32:15.174: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 52.651873692s
May 26 11:32:25.415: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.892489476s
May 26 11:32:27.418: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.895229813s
May 26 11:32:30.790: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.267350766s
May 26 11:32:32.994: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.47189547s
May 26 11:32:35.426: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.902960623s
May 26 11:32:38.259: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.736161016s
May 26 11:32:40.408: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.885013774s
May 26 11:32:43.324: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m20.801337684s
May 26 11:32:47.079: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m24.556802287s
May 26 11:32:49.082: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m26.559335775s
STEP: Saw pod success
May 26 11:32:49.082: INFO: Pod "pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 11:32:49.084: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
May 26 11:32:49.641: INFO: Waiting for pod pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018 to disappear
May 26 11:32:49.725: INFO: Pod pod-projected-configmaps-6d15066e-9f44-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:32:49.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pt7l7" for this suite.
May 26 11:32:55.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:32:55.832: INFO: namespace: e2e-tests-projected-pt7l7, resource: bindings, ignored listing per whitelist
May 26 11:32:55.855: INFO: namespace e2e-tests-projected-pt7l7 deletion completed in 6.127552468s
• [SLOW TEST:93.543 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:32:55.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May 26 11:32:55.989: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 26 11:32:56.002: INFO: Waiting for terminating namespaces to be deleted...
May 26 11:32:56.004: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 26 11:32:56.009: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 26 11:32:56.009: INFO: Container kube-proxy ready: true, restart count 0
May 26 11:32:56.009: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 26 11:32:56.009: INFO: Container kindnet-cni ready: true, restart count 0
May 26 11:32:56.009: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 26 11:32:56.009: INFO: Container coredns ready: true, restart count 0
May 26 11:32:56.009: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 26 11:32:56.012: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 26 11:32:56.012: INFO: Container kindnet-cni ready: true, restart count 0
May 26 11:32:56.012: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 26 11:32:56.012: INFO: Container coredns ready: true, restart count 0
May 26 11:32:56.012: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 26 11:32:56.012: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
May 26 11:32:56.131: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker
May 26 11:32:56.131: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2
May 26 11:32:56.131: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker
May 26 11:32:56.131: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2
May 26 11:32:56.131: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2
May 26 11:32:56.131: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-a4e57b17-9f44-11ea-b1d1-0242ac110018.161290c0ce0f09f3], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-cvmjv/filler-pod-a4e57b17-9f44-11ea-b1d1-0242ac110018 to hunter-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a4e57b17-9f44-11ea-b1d1-0242ac110018.161290c2500e2356], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a4e57b17-9f44-11ea-b1d1-0242ac110018.161290c2aa19e9f0], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a4e57b17-9f44-11ea-b1d1-0242ac110018.161290c3142eac60], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a4e62128-9f44-11ea-b1d1-0242ac110018.161290c0d0cd182d], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-cvmjv/filler-pod-a4e62128-9f44-11ea-b1d1-0242ac110018 to hunter-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a4e62128-9f44-11ea-b1d1-0242ac110018.161290c338b62b72], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a4e62128-9f44-11ea-b1d1-0242ac110018.161290c39ddfba5e], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a4e62128-9f44-11ea-b1d1-0242ac110018.161290c4097c02c3], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.161290c48b95dffe], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:33:13.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-cvmjv" for this suite.
May 26 11:33:19.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:33:19.417: INFO: namespace: e2e-tests-sched-pred-cvmjv, resource: bindings, ignored listing per whitelist
May 26 11:33:19.458: INFO: namespace e2e-tests-sched-pred-cvmjv deletion completed in 6.073709655s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:23.603 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:33:19.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-rhs2
STEP: Creating a pod to test atomic-volume-subpath
May 26 11:33:20.410: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rhs2" in namespace "e2e-tests-subpath-7pf8b" to be "success or failure"
May 26 11:33:20.420: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.606274ms
May 26 11:33:22.424: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013638982s
May 26 11:33:24.427: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016890209s
May 26 11:33:26.492: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081915133s
May 26 11:33:28.495: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085077256s
May 26 11:33:30.499: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.088647985s
May 26 11:33:32.582: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.171986894s
May 26 11:33:34.585: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.175469132s
May 26 11:33:36.589: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.179138981s
May 26 11:33:38.592: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Running", Reason="", readiness=true. Elapsed: 18.182458351s
May 26 11:33:40.595: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Running", Reason="", readiness=false. Elapsed: 20.185516754s
May 26 11:33:42.599: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Running", Reason="", readiness=false. Elapsed: 22.188866589s
May 26 11:33:44.602: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Running", Reason="", readiness=false. Elapsed: 24.192118255s
May 26 11:33:46.605: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Running", Reason="", readiness=false. Elapsed: 26.19506752s
May 26 11:33:48.608: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Running", Reason="", readiness=false. Elapsed: 28.19807775s
May 26 11:33:50.612: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Running", Reason="", readiness=false. Elapsed: 30.201597011s
May 26 11:33:52.614: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Running", Reason="", readiness=false. Elapsed: 32.204385736s
May 26 11:33:54.618: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Running", Reason="", readiness=false. Elapsed: 34.207611377s
May 26 11:33:56.620: INFO: Pod "pod-subpath-test-downwardapi-rhs2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.210569432s
STEP: Saw pod success
May 26 11:33:56.621: INFO: Pod "pod-subpath-test-downwardapi-rhs2" satisfied condition "success or failure"
May 26 11:33:56.623: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-rhs2 container test-container-subpath-downwardapi-rhs2:
STEP: delete the pod
May 26 11:33:56.642: INFO: Waiting for pod pod-subpath-test-downwardapi-rhs2 to disappear
May 26 11:33:56.750: INFO: Pod pod-subpath-test-downwardapi-rhs2 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rhs2
May 26 11:33:56.750: INFO: Deleting pod "pod-subpath-test-downwardapi-rhs2" in namespace "e2e-tests-subpath-7pf8b"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:33:56.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-7pf8b" for this suite.
May 26 11:34:02.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:34:02.825: INFO: namespace: e2e-tests-subpath-7pf8b, resource: bindings, ignored listing per whitelist
May 26 11:34:02.902: INFO: namespace e2e-tests-subpath-7pf8b deletion completed in 6.121781999s
• [SLOW TEST:43.443 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:34:02.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0526 11:34:04.048898 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 26 11:34:04.048: INFO: For apiserver_request_count:
  For apiserver_request_latencies_summary:
  For etcd_helper_cache_entry_count:
  For etcd_helper_cache_hit_count:
  For etcd_helper_cache_miss_count:
  For etcd_request_cache_add_latencies_summary:
  For etcd_request_cache_get_latencies_summary:
  For etcd_request_latencies_summary:
  For garbage_collector_attempt_to_delete_queue_latency:
  For garbage_collector_attempt_to_delete_work_duration:
  For garbage_collector_attempt_to_orphan_queue_latency:
  For garbage_collector_attempt_to_orphan_work_duration:
  For garbage_collector_dirty_processing_latency_microseconds:
  For garbage_collector_event_processing_latency_microseconds:
  For garbage_collector_graph_changes_queue_latency:
  For garbage_collector_graph_changes_work_duration:
  For garbage_collector_orphan_processing_latency_microseconds:
  For namespace_queue_latency:
  For namespace_queue_latency_sum:
  For namespace_queue_latency_count:
  For namespace_retries:
  For namespace_work_duration:
  For namespace_work_duration_sum:
  For namespace_work_duration_count:
  For function_duration_seconds:
  For errors_total:
  For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:34:04.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-9drp4" for this suite.
May 26 11:34:10.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:34:10.081: INFO: namespace: e2e-tests-gc-9drp4, resource: bindings, ignored listing per whitelist
May 26 11:34:10.140: INFO: namespace e2e-tests-gc-9drp4 deletion completed in 6.086047399s
• [SLOW TEST:7.238 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:34:10.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
May 26 11:34:10.224: INFO: Waiting up to 5m0s for pod "client-containers-d10e59c5-9f44-11ea-b1d1-0242ac110018" in namespace "e2e-tests-containers-fqcms" to be "success or failure"
May 26 11:34:10.236: INFO: Pod "client-containers-d10e59c5-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.909358ms
May 26 11:34:12.240: INFO: Pod "client-containers-d10e59c5-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01567255s
May 26 11:34:14.242: INFO: Pod "client-containers-d10e59c5-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018226835s
May 26 11:34:16.246: INFO: Pod "client-containers-d10e59c5-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021980738s
May 26 11:34:18.250: INFO: Pod "client-containers-d10e59c5-9f44-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025732091s
May 26 11:34:20.253: INFO: Pod "client-containers-d10e59c5-9f44-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 10.029246532s
May 26 11:34:22.257: INFO: Pod "client-containers-d10e59c5-9f44-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.032878757s
STEP: Saw pod success
May 26 11:34:22.257: INFO: Pod "client-containers-d10e59c5-9f44-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 11:34:22.259: INFO: Trying to get logs from node hunter-worker pod client-containers-d10e59c5-9f44-11ea-b1d1-0242ac110018 container test-container:
STEP: delete the pod
May 26 11:34:22.428: INFO: Waiting for pod client-containers-d10e59c5-9f44-11ea-b1d1-0242ac110018 to disappear
May 26 11:34:22.451: INFO: Pod client-containers-d10e59c5-9f44-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:34:22.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-fqcms" for this suite.
May 26 11:34:28.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:34:28.564: INFO: namespace: e2e-tests-containers-fqcms, resource: bindings, ignored listing per whitelist
May 26 11:34:28.571: INFO: namespace e2e-tests-containers-fqcms deletion completed in 6.115965099s
• [SLOW TEST:18.429 seconds]
[k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:34:28.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
May 26 11:34:28.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:31.163: INFO: stderr: ""
May 26 11:34:31.163: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 26 11:34:31.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:31.270: INFO: stderr: ""
May 26 11:34:31.270: INFO: stdout: "update-demo-nautilus-grfj2 update-demo-nautilus-ngzd2 "
May 26 11:34:31.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grfj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:31.360: INFO: stderr: ""
May 26 11:34:31.360: INFO: stdout: ""
May 26 11:34:31.360: INFO: update-demo-nautilus-grfj2 is created but not running
May 26 11:34:36.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:36.452: INFO: stderr: ""
May 26 11:34:36.452: INFO: stdout: "update-demo-nautilus-grfj2 update-demo-nautilus-ngzd2 "
May 26 11:34:36.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grfj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:36.571: INFO: stderr: ""
May 26 11:34:36.571: INFO: stdout: ""
May 26 11:34:36.571: INFO: update-demo-nautilus-grfj2 is created but not running
May 26 11:34:41.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:41.672: INFO: stderr: ""
May 26 11:34:41.672: INFO: stdout: "update-demo-nautilus-grfj2 update-demo-nautilus-ngzd2 "
May 26 11:34:41.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grfj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:41.762: INFO: stderr: ""
May 26 11:34:41.762: INFO: stdout: "true"
May 26 11:34:41.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grfj2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:41.994: INFO: stderr: ""
May 26 11:34:41.994: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 26 11:34:41.995: INFO: validating pod update-demo-nautilus-grfj2
May 26 11:34:42.014: INFO: got data: { "image": "nautilus.jpg" }
May 26 11:34:42.014: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 26 11:34:42.014: INFO: update-demo-nautilus-grfj2 is verified up and running
May 26 11:34:42.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ngzd2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:42.105: INFO: stderr: ""
May 26 11:34:42.105: INFO: stdout: ""
May 26 11:34:42.105: INFO: update-demo-nautilus-ngzd2 is created but not running
May 26 11:34:47.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:47.206: INFO: stderr: ""
May 26 11:34:47.206: INFO: stdout: "update-demo-nautilus-grfj2 update-demo-nautilus-ngzd2 "
May 26 11:34:47.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grfj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:47.301: INFO: stderr: ""
May 26 11:34:47.301: INFO: stdout: "true"
May 26 11:34:47.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grfj2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:47.389: INFO: stderr: ""
May 26 11:34:47.389: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 26 11:34:47.389: INFO: validating pod update-demo-nautilus-grfj2
May 26 11:34:47.392: INFO: got data: { "image": "nautilus.jpg" }
May 26 11:34:47.392: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 26 11:34:47.392: INFO: update-demo-nautilus-grfj2 is verified up and running
May 26 11:34:47.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ngzd2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:47.480: INFO: stderr: ""
May 26 11:34:47.480: INFO: stdout: "true"
May 26 11:34:47.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ngzd2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:34:47.576: INFO: stderr: ""
May 26 11:34:47.577: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 26 11:34:47.577: INFO: validating pod update-demo-nautilus-ngzd2
May 26 11:34:47.582: INFO: got data: { "image": "nautilus.jpg" }
May 26 11:34:47.582: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 26 11:34:47.582: INFO: update-demo-nautilus-ngzd2 is verified up and running
STEP: rolling-update to new replication controller
May 26 11:34:47.583: INFO: scanned /root for discovery docs:
May 26 11:34:47.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-p7xgg'
May 26 11:35:25.299: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 26 11:35:25.299: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 26 11:35:25.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-p7xgg' May 26 11:35:25.393: INFO: stderr: "" May 26 11:35:25.393: INFO: stdout: "update-demo-kitten-d5vfb update-demo-kitten-wwsbp " May 26 11:35:25.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-d5vfb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg' May 26 11:35:25.485: INFO: stderr: "" May 26 11:35:25.485: INFO: stdout: "true" May 26 11:35:25.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-d5vfb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg' May 26 11:35:25.577: INFO: stderr: "" May 26 11:35:25.577: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 26 11:35:25.577: INFO: validating pod update-demo-kitten-d5vfb May 26 11:35:25.588: INFO: got data: { "image": "kitten.jpg" } May 26 11:35:25.588: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 26 11:35:25.588: INFO: update-demo-kitten-d5vfb is verified up and running May 26 11:35:25.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wwsbp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg' May 26 11:35:25.688: INFO: stderr: "" May 26 11:35:25.688: INFO: stdout: "true" May 26 11:35:25.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wwsbp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-p7xgg' May 26 11:35:25.783: INFO: stderr: "" May 26 11:35:25.783: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 26 11:35:25.783: INFO: validating pod update-demo-kitten-wwsbp May 26 11:35:25.787: INFO: got data: { "image": "kitten.jpg" } May 26 11:35:25.787: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 26 11:35:25.787: INFO: update-demo-kitten-wwsbp is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:35:25.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-p7xgg" for this suite. 
May 26 11:35:47.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:35:47.847: INFO: namespace: e2e-tests-kubectl-p7xgg, resource: bindings, ignored listing per whitelist May 26 11:35:47.899: INFO: namespace e2e-tests-kubectl-p7xgg deletion completed in 22.108935046s • [SLOW TEST:79.328 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:35:47.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 26 11:36:02.576: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0b5d1b8e-9f45-11ea-b1d1-0242ac110018" May 26 11:36:02.576: INFO: Waiting up to 5m0s for pod 
"pod-update-activedeadlineseconds-0b5d1b8e-9f45-11ea-b1d1-0242ac110018" in namespace "e2e-tests-pods-4r8bw" to be "terminated due to deadline exceeded" May 26 11:36:02.638: INFO: Pod "pod-update-activedeadlineseconds-0b5d1b8e-9f45-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 61.760967ms May 26 11:36:04.662: INFO: Pod "pod-update-activedeadlineseconds-0b5d1b8e-9f45-11ea-b1d1-0242ac110018": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.085712186s May 26 11:36:04.662: INFO: Pod "pod-update-activedeadlineseconds-0b5d1b8e-9f45-11ea-b1d1-0242ac110018" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:36:04.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-4r8bw" for this suite. May 26 11:36:10.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:36:10.721: INFO: namespace: e2e-tests-pods-4r8bw, resource: bindings, ignored listing per whitelist May 26 11:36:10.733: INFO: namespace e2e-tests-pods-4r8bw deletion completed in 6.066727342s • [SLOW TEST:22.833 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:36:10.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 26 11:36:22.902: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-18f99092-9f45-11ea-b1d1-0242ac110018,GenerateName:,Namespace:e2e-tests-events-v42pz,SelfLink:/api/v1/namespaces/e2e-tests-events-v42pz/pods/send-events-18f99092-9f45-11ea-b1d1-0242ac110018,UID:18f9fd46-9f45-11ea-99e8-0242ac110002,ResourceVersion:12607412,Generation:0,CreationTimestamp:2020-05-26 11:36:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 878996118,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nm7zk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nm7zk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-nm7zk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a0d270} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a0d290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:36:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:36:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:36:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:36:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.69,StartTime:2020-05-26 11:36:10 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-26 11:36:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://273679f56eb780cd9c5c9fbe9d0607d3ef15ece81c86aec93969621083f68946}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 26 11:36:24.906: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 26 11:36:26.909: INFO: Saw kubelet event for our pod. 
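The two checks above ("Saw scheduler event" and "Saw kubelet event") amount to filtering the namespace's event list by the involved pod and the reporting component. A rough illustration of that filter over plain dicts standing in for v1.Event objects (the field names here are simplified assumptions, not the client-go schema):

```python
def saw_event_from(events, pod_name, component):
    """True if any event concerns pod_name and was reported by
    component (e.g. "default-scheduler" or "kubelet")."""
    return any(
        e.get("involved_object") == pod_name and e.get("source") == component
        for e in events
    )

events = [
    {"involved_object": "send-events-123", "source": "default-scheduler", "reason": "Scheduled"},
    {"involved_object": "send-events-123", "source": "kubelet", "reason": "Started"},
]
```

The real test lists events with a field selector on the pod's name/namespace/UID rather than scanning client-side, but the membership check is the same idea.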
STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:36:26.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-v42pz" for this suite. May 26 11:37:16.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:37:16.940: INFO: namespace: e2e-tests-events-v42pz, resource: bindings, ignored listing per whitelist May 26 11:37:17.018: INFO: namespace e2e-tests-events-v42pz deletion completed in 50.098474024s • [SLOW TEST:66.285 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:37:17.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 26 11:37:17.131: INFO: Waiting up to 5m0s for pod "pod-40754c88-9f45-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-nk7xz" to be "success or failure" May 26 11:37:17.136: 
INFO: Pod "pod-40754c88-9f45-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.67095ms May 26 11:37:19.139: INFO: Pod "pod-40754c88-9f45-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008380541s May 26 11:37:21.143: INFO: Pod "pod-40754c88-9f45-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011718847s May 26 11:37:23.184: INFO: Pod "pod-40754c88-9f45-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053215442s May 26 11:37:25.188: INFO: Pod "pod-40754c88-9f45-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056855564s May 26 11:37:27.191: INFO: Pod "pod-40754c88-9f45-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060399897s May 26 11:37:29.195: INFO: Pod "pod-40754c88-9f45-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.063948637s May 26 11:37:31.198: INFO: Pod "pod-40754c88-9f45-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.06681597s STEP: Saw pod success May 26 11:37:31.198: INFO: Pod "pod-40754c88-9f45-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 11:37:31.200: INFO: Trying to get logs from node hunter-worker2 pod pod-40754c88-9f45-11ea-b1d1-0242ac110018 container test-container: STEP: delete the pod May 26 11:37:31.269: INFO: Waiting for pod pod-40754c88-9f45-11ea-b1d1-0242ac110018 to disappear May 26 11:37:31.303: INFO: Pod pod-40754c88-9f45-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:37:31.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nk7xz" for this suite. 
May 26 11:37:37.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:37:37.554: INFO: namespace: e2e-tests-emptydir-nk7xz, resource: bindings, ignored listing per whitelist May 26 11:37:37.564: INFO: namespace e2e-tests-emptydir-nk7xz deletion completed in 6.257957791s • [SLOW TEST:20.546 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:37:37.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-4cb7a099-9f45-11ea-b1d1-0242ac110018 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-4cb7a099-9f45-11ea-b1d1-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:39:04.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-projected-bx27l" for this suite. May 26 11:39:26.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:39:26.111: INFO: namespace: e2e-tests-projected-bx27l, resource: bindings, ignored listing per whitelist May 26 11:39:26.123: INFO: namespace e2e-tests-projected-bx27l deletion completed in 22.080195855s • [SLOW TEST:108.559 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:39:26.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 26 11:39:52.253: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:39:52.291: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:39:54.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:39:54.295: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:39:56.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:39:56.295: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:39:58.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:39:58.295: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:40:00.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:40:00.295: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:40:02.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:40:02.295: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:40:04.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:40:04.295: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:40:06.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:40:06.296: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:40:08.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:40:08.336: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:40:10.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:40:10.295: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:40:12.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:40:12.297: INFO: Pod 
pod-with-poststart-exec-hook still exists May 26 11:40:14.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:40:14.295: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:40:16.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:40:16.295: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:40:18.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:40:18.295: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:40:20.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:40:20.295: INFO: Pod pod-with-poststart-exec-hook still exists May 26 11:40:22.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 11:40:22.295: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:40:22.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4hr22" for this suite. 
May 26 11:40:44.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:40:44.334: INFO: namespace: e2e-tests-container-lifecycle-hook-4hr22, resource: bindings, ignored listing per whitelist May 26 11:40:44.406: INFO: namespace e2e-tests-container-lifecycle-hook-4hr22 deletion completed in 22.108217035s • [SLOW TEST:78.283 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:40:44.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 26 11:40:44.540: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
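The "can't tolerate node ... with taints ... skip checking this node" lines that follow show the availability check first excluding nodes whose taints the DaemonSet does not tolerate. A simplified sketch of that filter over plain dicts (field names are assumptions for illustration, not the v1.Node schema):

```python
def nodes_to_check(nodes, tolerated_taint_keys=()):
    """Keep only nodes whose every taint key is tolerated, mirroring
    the "skip checking this node" messages in the log."""
    keep = []
    for node in nodes:
        taints = node.get("taints", [])
        if all(t["key"] in tolerated_taint_keys for t in taints):
            keep.append(node["name"])
    return keep

nodes = [
    {"name": "hunter-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master"}]},
    {"name": "hunter-worker", "taints": []},
    {"name": "hunter-worker2", "taints": []},
]
```

Real toleration matching also considers operator, value, and effect; keying on the taint key alone is enough to reproduce the control-plane skip seen here.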
May 26 11:40:44.548: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:40:44.550: INFO: Number of nodes with available pods: 0 May 26 11:40:44.550: INFO: Node hunter-worker is running more than one daemon pod May 26 11:40:45.555: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:40:45.558: INFO: Number of nodes with available pods: 0 May 26 11:40:45.558: INFO: Node hunter-worker is running more than one daemon pod May 26 11:40:46.554: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:40:46.556: INFO: Number of nodes with available pods: 0 May 26 11:40:46.556: INFO: Node hunter-worker is running more than one daemon pod May 26 11:40:47.555: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:40:47.558: INFO: Number of nodes with available pods: 0 May 26 11:40:47.558: INFO: Node hunter-worker is running more than one daemon pod May 26 11:40:48.555: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:40:48.558: INFO: Number of nodes with available pods: 0 May 26 11:40:48.558: INFO: Node hunter-worker is running more than one daemon pod May 26 11:40:49.554: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 11:40:49.557: INFO: Number of nodes with available pods: 0 May 26 11:40:49.557: 
INFO: Node hunter-worker is running more than one daemon pod
May 26 11:40:50.554: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:40:50.557: INFO: Number of nodes with available pods: 0
May 26 11:40:50.557: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:40:56.554: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:40:56.557: INFO: Number of nodes with available pods: 1
May 26 11:40:56.557: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:40:58.553: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:40:58.556: INFO: Number of nodes with available pods: 2
May 26 11:40:58.556: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 26 11:40:58.600: INFO: Wrong image for pod: daemon-set-fw4gc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 26 11:40:58.600: INFO: Wrong image for pod: daemon-set-gtt8d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 26 11:40:58.624: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:41:03.628: INFO: Wrong image for pod: daemon-set-fw4gc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 26 11:41:03.628: INFO: Pod daemon-set-fw4gc is not available
May 26 11:41:03.628: INFO: Wrong image for pod: daemon-set-gtt8d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 26 11:41:03.632: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:41:12.627: INFO: Wrong image for pod: daemon-set-gtt8d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 26 11:41:12.627: INFO: Pod daemon-set-mzllc is not available
May 26 11:41:12.631: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:41:22.628: INFO: Wrong image for pod: daemon-set-gtt8d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 26 11:41:22.631: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:41:26.628: INFO: Wrong image for pod: daemon-set-gtt8d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 26 11:41:26.628: INFO: Pod daemon-set-gtt8d is not available
May 26 11:41:26.631: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:41:30.628: INFO: Wrong image for pod: daemon-set-gtt8d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 26 11:41:30.628: INFO: Pod daemon-set-gtt8d is not available
May 26 11:41:30.632: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:41:31.628: INFO: Pod daemon-set-4zv5d is not available
May 26 11:41:31.631: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 26 11:41:31.635: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:41:31.638: INFO: Number of nodes with available pods: 1
May 26 11:41:31.638: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:41:43.642: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:41:43.644: INFO: Number of nodes with available pods: 2
May 26 11:41:43.644: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-qz2n4, will wait for the garbage collector to delete the pods
May 26 11:41:43.716: INFO: Deleting DaemonSet.extensions daemon-set took: 5.444316ms
May 26 11:41:43.816: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.235972ms
May 26 11:42:01.320: INFO: Number of nodes with available pods: 0
May 26 11:42:01.320: INFO: Number of running nodes: 0, number of available pods: 0
May 26 11:42:01.322: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-qz2n4/daemonsets","resourceVersion":"12608240"},"items":null}
May 26 11:42:01.325: INFO: pods:
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-qz2n4/pods","resourceVersion":"12608240"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:42:01.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-qz2n4" for this suite.
May 26 11:42:07.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:42:07.405: INFO: namespace: e2e-tests-daemonsets-qz2n4, resource: bindings, ignored listing per whitelist
May 26 11:42:07.434: INFO: namespace e2e-tests-daemonsets-qz2n4 deletion completed in 6.098591273s
• [SLOW TEST:83.028 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:42:07.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:43:07.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-z5scn" for this suite.
May 26 11:43:29.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:43:29.654: INFO: namespace: e2e-tests-container-probe-z5scn, resource: bindings, ignored listing per whitelist
May 26 11:43:29.675: INFO: namespace e2e-tests-container-probe-z5scn deletion completed in 22.069705304s
• [SLOW TEST:82.240 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:43:29.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-1e946877-9f46-11ea-b1d1-0242ac110018
STEP: Creating secret with name s-test-opt-upd-1e9468e2-9f46-11ea-b1d1-0242ac110018
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1e946877-9f46-11ea-b1d1-0242ac110018
STEP: Updating secret s-test-opt-upd-1e9468e2-9f46-11ea-b1d1-0242ac110018
STEP: Creating secret with name s-test-opt-create-1e946912-9f46-11ea-b1d1-0242ac110018
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:45:08.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hpf9d" for this suite.
May 26 11:45:46.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:45:46.513: INFO: namespace: e2e-tests-projected-hpf9d, resource: bindings, ignored listing per whitelist
May 26 11:45:46.535: INFO: namespace e2e-tests-projected-hpf9d deletion completed in 38.066017288s
• [SLOW TEST:136.861 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:45:46.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-702c0994-9f46-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume configMaps
May 26 11:45:46.692: INFO: Waiting up to 5m0s for pod "pod-configmaps-702ea248-9f46-11ea-b1d1-0242ac110018" in namespace "e2e-tests-configmap-l5m9q" to be "success or failure"
May 26 11:45:46.696: INFO: Pod "pod-configmaps-702ea248-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045855ms
May 26 11:45:48.700: INFO: Pod "pod-configmaps-702ea248-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007994555s
May 26 11:45:56.712: INFO: Pod "pod-configmaps-702ea248-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020231058s
May 26 11:45:58.715: INFO: Pod "pod-configmaps-702ea248-9f46-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.022657015s
STEP: Saw pod success
May 26 11:45:58.715: INFO: Pod "pod-configmaps-702ea248-9f46-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 11:45:58.716: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-702ea248-9f46-11ea-b1d1-0242ac110018 container configmap-volume-test:
STEP: delete the pod
May 26 11:45:58.857: INFO: Waiting for pod pod-configmaps-702ea248-9f46-11ea-b1d1-0242ac110018 to disappear
May 26 11:45:58.949: INFO: Pod pod-configmaps-702ea248-9f46-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:45:58.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-l5m9q" for this suite.
May 26 11:46:04.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:46:05.000: INFO: namespace: e2e-tests-configmap-l5m9q, resource: bindings, ignored listing per whitelist
May 26 11:46:05.040: INFO: namespace e2e-tests-configmap-l5m9q deletion completed in 6.086390969s
• [SLOW TEST:18.504 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:46:05.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 26 11:46:05.128: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
May 26 11:46:05.149: INFO: Pod name sample-pod: Found 0 pods out of 1
May 26 11:46:10.168: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 26 11:46:22.173: INFO: Creating deployment "test-rolling-update-deployment"
May 26 11:46:22.177: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
May 26 11:46:22.184: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
May 26 11:46:24.191: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
May 26 11:46:24.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726090382, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726090382, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726090382, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726090382, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 26 11:46:32.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726090382, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726090382, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726090382, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726090382, loc:(*time.Location)(0x7950ac0)}},
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 11:46:34.201: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726090382, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726090382, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726090382, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726090382, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 11:46:36.197: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 26 11:46:36.206: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-6j7ts,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6j7ts/deployments/test-rolling-update-deployment,UID:8555e33f-9f46-11ea-99e8-0242ac110002,ResourceVersion:12608937,Generation:1,CreationTimestamp:2020-05-26 11:46:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-26 11:46:22 +0000 UTC 2020-05-26 11:46:22 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-26 11:46:34 +0000 UTC 2020-05-26 11:46:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 26 11:46:36.209: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-6j7ts,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6j7ts/replicasets/test-rolling-update-deployment-75db98fb4c,UID:85580348-9f46-11ea-99e8-0242ac110002,ResourceVersion:12608928,Generation:1,CreationTimestamp:2020-05-26 11:46:22 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8555e33f-9f46-11ea-99e8-0242ac110002 0xc0016f8257 0xc0016f8258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 26 11:46:36.209: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 26 11:46:36.209: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-6j7ts,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6j7ts/replicasets/test-rolling-update-controller,UID:7b2d1328-9f46-11ea-99e8-0242ac110002,ResourceVersion:12608936,Generation:2,CreationTimestamp:2020-05-26 11:46:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 8555e33f-9f46-11ea-99e8-0242ac110002 0xc0016f8127 0xc0016f8128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 26 11:46:36.212: INFO: Pod "test-rolling-update-deployment-75db98fb4c-h88lm" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-h88lm,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-6j7ts,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6j7ts/pods/test-rolling-update-deployment-75db98fb4c-h88lm,UID:855893d2-9f46-11ea-99e8-0242ac110002,ResourceVersion:12608927,Generation:0,CreationTimestamp:2020-05-26 11:46:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 85580348-9f46-11ea-99e8-0242ac110002 0xc0016f94c7 0xc0016f94c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6pwqt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pwqt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-6pwqt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016f95b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016f95d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:46:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:46:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:46:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:46:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.137,StartTime:2020-05-26 11:46:22 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-26 11:46:33 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://1f385962f5299d29e61b8b441b08b3ed7ca8db70f01159d7f3996fdbff16e909}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:46:36.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-6j7ts" 
for this suite. May 26 11:46:42.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:46:42.286: INFO: namespace: e2e-tests-deployment-6j7ts, resource: bindings, ignored listing per whitelist May 26 11:46:42.326: INFO: namespace e2e-tests-deployment-6j7ts deletion completed in 6.111251231s • [SLOW TEST:37.286 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:46:42.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-918868a5-9f46-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume configMaps May 26 11:46:42.661: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-rtkdc" to be "success or failure" May 26 11:46:42.707: INFO: Pod "pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", 
readiness=false. Elapsed: 45.769168ms May 26 11:46:44.713: INFO: Pod "pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051939051s May 26 11:46:46.716: INFO: Pod "pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054607467s May 26 11:46:48.815: INFO: Pod "pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153862543s May 26 11:46:50.818: INFO: Pod "pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157242866s May 26 11:46:52.822: INFO: Pod "pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.160876096s May 26 11:46:54.824: INFO: Pod "pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.163313514s May 26 11:46:56.827: INFO: Pod "pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.166095729s STEP: Saw pod success May 26 11:46:56.827: INFO: Pod "pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 11:46:56.830: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 26 11:46:56.860: INFO: Waiting for pod pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018 to disappear May 26 11:46:56.929: INFO: Pod pod-projected-configmaps-9189dcec-9f46-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:46:56.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rtkdc" for this suite. May 26 11:47:02.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:47:02.983: INFO: namespace: e2e-tests-projected-rtkdc, resource: bindings, ignored listing per whitelist May 26 11:47:03.000: INFO: namespace e2e-tests-projected-rtkdc deletion completed in 6.067635395s • [SLOW TEST:20.674 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:47:03.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 26 11:47:03.154: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q89d2,SelfLink:/api/v1/namespaces/e2e-tests-watch-q89d2/configmaps/e2e-watch-test-configmap-a,UID:9dc15d19-9f46-11ea-99e8-0242ac110002,ResourceVersion:12609049,Generation:0,CreationTimestamp:2020-05-26 11:47:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 26 11:47:03.154: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q89d2,SelfLink:/api/v1/namespaces/e2e-tests-watch-q89d2/configmaps/e2e-watch-test-configmap-a,UID:9dc15d19-9f46-11ea-99e8-0242ac110002,ResourceVersion:12609049,Generation:0,CreationTimestamp:2020-05-26 11:47:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 26 11:47:13.160: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q89d2,SelfLink:/api/v1/namespaces/e2e-tests-watch-q89d2/configmaps/e2e-watch-test-configmap-a,UID:9dc15d19-9f46-11ea-99e8-0242ac110002,ResourceVersion:12609069,Generation:0,CreationTimestamp:2020-05-26 11:47:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 26 11:47:13.160: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q89d2,SelfLink:/api/v1/namespaces/e2e-tests-watch-q89d2/configmaps/e2e-watch-test-configmap-a,UID:9dc15d19-9f46-11ea-99e8-0242ac110002,ResourceVersion:12609069,Generation:0,CreationTimestamp:2020-05-26 11:47:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 26 11:47:23.167: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q89d2,SelfLink:/api/v1/namespaces/e2e-tests-watch-q89d2/configmaps/e2e-watch-test-configmap-a,UID:9dc15d19-9f46-11ea-99e8-0242ac110002,ResourceVersion:12609089,Generation:0,CreationTimestamp:2020-05-26 11:47:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 26 11:47:23.167: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q89d2,SelfLink:/api/v1/namespaces/e2e-tests-watch-q89d2/configmaps/e2e-watch-test-configmap-a,UID:9dc15d19-9f46-11ea-99e8-0242ac110002,ResourceVersion:12609089,Generation:0,CreationTimestamp:2020-05-26 11:47:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 26 11:47:33.172: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q89d2,SelfLink:/api/v1/namespaces/e2e-tests-watch-q89d2/configmaps/e2e-watch-test-configmap-a,UID:9dc15d19-9f46-11ea-99e8-0242ac110002,ResourceVersion:12609109,Generation:0,CreationTimestamp:2020-05-26 11:47:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 26 11:47:33.172: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q89d2,SelfLink:/api/v1/namespaces/e2e-tests-watch-q89d2/configmaps/e2e-watch-test-configmap-a,UID:9dc15d19-9f46-11ea-99e8-0242ac110002,ResourceVersion:12609109,Generation:0,CreationTimestamp:2020-05-26 11:47:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 26 11:47:43.178: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-q89d2,SelfLink:/api/v1/namespaces/e2e-tests-watch-q89d2/configmaps/e2e-watch-test-configmap-b,UID:b59d7532-9f46-11ea-99e8-0242ac110002,ResourceVersion:12609129,Generation:0,CreationTimestamp:2020-05-26 11:47:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 26 11:47:43.179: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-q89d2,SelfLink:/api/v1/namespaces/e2e-tests-watch-q89d2/configmaps/e2e-watch-test-configmap-b,UID:b59d7532-9f46-11ea-99e8-0242ac110002,ResourceVersion:12609129,Generation:0,CreationTimestamp:2020-05-26 11:47:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 26 11:47:53.184: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-q89d2,SelfLink:/api/v1/namespaces/e2e-tests-watch-q89d2/configmaps/e2e-watch-test-configmap-b,UID:b59d7532-9f46-11ea-99e8-0242ac110002,ResourceVersion:12609149,Generation:0,CreationTimestamp:2020-05-26 11:47:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 26 11:47:53.184: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-q89d2,SelfLink:/api/v1/namespaces/e2e-tests-watch-q89d2/configmaps/e2e-watch-test-configmap-b,UID:b59d7532-9f46-11ea-99e8-0242ac110002,ResourceVersion:12609149,Generation:0,CreationTimestamp:2020-05-26 11:47:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:48:03.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-q89d2" for this suite. May 26 11:48:09.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:48:09.218: INFO: namespace: e2e-tests-watch-q89d2, resource: bindings, ignored listing per whitelist May 26 11:48:09.256: INFO: namespace e2e-tests-watch-q89d2 deletion completed in 6.062953876s • [SLOW TEST:66.256 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:48:09.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 26 11:48:09.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-zhp5v' May 26 11:48:12.825: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 26 11:48:12.825: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 26 11:48:12.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-zhp5v' May 26 11:48:12.939: INFO: stderr: "" May 26 11:48:12.939: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:48:12.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zhp5v" for this suite. 
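The stderr above notes that `kubectl run --generator=job/v1` is deprecated in favor of `kubectl create`. A sketch of a roughly equivalent Job manifest, with fields inferred only from the flags visible in the logged command (the object and container names follow the logged job name; everything else is an assumption):

```yaml
# Hypothetical manifest approximating the deprecated invocation:
#   kubectl run e2e-test-nginx-job --restart=OnFailure \
#     --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
      restartPolicy: OnFailure   # maps from --restart=OnFailure
```

Applied with `kubectl create -f job.yaml --namespace=<test namespace>`, this produces the same `job.batch/e2e-test-nginx-job created` result without the deprecated generator.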
May 26 11:48:34.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:48:34.998: INFO: namespace: e2e-tests-kubectl-zhp5v, resource: bindings, ignored listing per whitelist May 26 11:48:35.049: INFO: namespace e2e-tests-kubectl-zhp5v deletion completed in 22.107096304s • [SLOW TEST:25.793 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:48:35.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-d4931d91-9f46-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume secrets May 26 11:48:35.153: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d495a118-9f46-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-ffvbw" to be "success or failure" May 26 11:48:35.161: INFO: Pod 
"pod-projected-secrets-d495a118-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.579343ms May 26 11:48:37.165: INFO: Pod "pod-projected-secrets-d495a118-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012121062s May 26 11:48:39.169: INFO: Pod "pod-projected-secrets-d495a118-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015689426s May 26 11:48:41.172: INFO: Pod "pod-projected-secrets-d495a118-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019219475s May 26 11:48:43.175: INFO: Pod "pod-projected-secrets-d495a118-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022335466s May 26 11:48:45.179: INFO: Pod "pod-projected-secrets-d495a118-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025810479s May 26 11:48:47.182: INFO: Pod "pod-projected-secrets-d495a118-9f46-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.028640393s STEP: Saw pod success May 26 11:48:47.182: INFO: Pod "pod-projected-secrets-d495a118-9f46-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 11:48:47.184: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-d495a118-9f46-11ea-b1d1-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 26 11:48:47.234: INFO: Waiting for pod pod-projected-secrets-d495a118-9f46-11ea-b1d1-0242ac110018 to disappear May 26 11:48:47.262: INFO: Pod pod-projected-secrets-d495a118-9f46-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:48:47.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ffvbw" for this suite. 
May 26 11:48:53.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:48:53.318: INFO: namespace: e2e-tests-projected-ffvbw, resource: bindings, ignored listing per whitelist May 26 11:48:53.349: INFO: namespace e2e-tests-projected-ffvbw deletion completed in 6.083526969s • [SLOW TEST:18.300 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:48:53.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 26 11:49:25.531: INFO: Container started at 2020-05-26 11:49:03 +0000 UTC, pod became ready at 2020-05-26 11:49:24 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:49:25.531: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-lzc7c" for this suite. May 26 11:49:47.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:49:47.577: INFO: namespace: e2e-tests-container-probe-lzc7c, resource: bindings, ignored listing per whitelist May 26 11:49:47.615: INFO: namespace e2e-tests-container-probe-lzc7c deletion completed in 22.080504269s • [SLOW TEST:54.265 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:49:47.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 26 11:49:47.716: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ffd63486-9f46-11ea-b1d1-0242ac110018" in 
namespace "e2e-tests-projected-r5sc4" to be "success or failure" May 26 11:49:47.720: INFO: Pod "downwardapi-volume-ffd63486-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.70149ms May 26 11:49:49.723: INFO: Pod "downwardapi-volume-ffd63486-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006902493s May 26 11:49:51.727: INFO: Pod "downwardapi-volume-ffd63486-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010836556s May 26 11:49:53.731: INFO: Pod "downwardapi-volume-ffd63486-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014645285s May 26 11:49:55.734: INFO: Pod "downwardapi-volume-ffd63486-9f46-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018247625s May 26 11:49:57.738: INFO: Pod "downwardapi-volume-ffd63486-9f46-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 10.021971444s May 26 11:49:59.742: INFO: Pod "downwardapi-volume-ffd63486-9f46-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.025582746s STEP: Saw pod success May 26 11:49:59.742: INFO: Pod "downwardapi-volume-ffd63486-9f46-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 11:49:59.744: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ffd63486-9f46-11ea-b1d1-0242ac110018 container client-container: STEP: delete the pod May 26 11:49:59.786: INFO: Waiting for pod downwardapi-volume-ffd63486-9f46-11ea-b1d1-0242ac110018 to disappear May 26 11:49:59.914: INFO: Pod downwardapi-volume-ffd63486-9f46-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:49:59.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r5sc4" for this suite. May 26 11:50:05.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 11:50:05.977: INFO: namespace: e2e-tests-projected-r5sc4, resource: bindings, ignored listing per whitelist May 26 11:50:06.029: INFO: namespace e2e-tests-projected-r5sc4 deletion completed in 6.111435284s • [SLOW TEST:18.414 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 11:50:06.029: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 26 11:50:06.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:06.386: INFO: stderr: "" May 26 11:50:06.386: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 26 11:50:06.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:06.479: INFO: stderr: "" May 26 11:50:06.479: INFO: stdout: "update-demo-nautilus-m458w update-demo-nautilus-pzjl7 " May 26 11:50:06.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m458w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:06.567: INFO: stderr: "" May 26 11:50:06.567: INFO: stdout: "" May 26 11:50:06.567: INFO: update-demo-nautilus-m458w is created but not running May 26 11:50:11.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:11.663: INFO: stderr: "" May 26 11:50:11.663: INFO: stdout: "update-demo-nautilus-m458w update-demo-nautilus-pzjl7 " May 26 11:50:11.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m458w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:11.754: INFO: stderr: "" May 26 11:50:11.754: INFO: stdout: "" May 26 11:50:11.754: INFO: update-demo-nautilus-m458w is created but not running May 26 11:50:16.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:16.861: INFO: stderr: "" May 26 11:50:16.861: INFO: stdout: "update-demo-nautilus-m458w update-demo-nautilus-pzjl7 " May 26 11:50:16.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m458w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:16.940: INFO: stderr: "" May 26 11:50:16.940: INFO: stdout: "" May 26 11:50:16.940: INFO: update-demo-nautilus-m458w is created but not running May 26 11:50:21.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:22.035: INFO: stderr: "" May 26 11:50:22.035: INFO: stdout: "update-demo-nautilus-m458w update-demo-nautilus-pzjl7 " May 26 11:50:22.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m458w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:22.119: INFO: stderr: "" May 26 11:50:22.119: INFO: stdout: "true" May 26 11:50:22.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m458w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:22.220: INFO: stderr: "" May 26 11:50:22.220: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 11:50:22.220: INFO: validating pod update-demo-nautilus-m458w May 26 11:50:22.230: INFO: got data: { "image": "nautilus.jpg" } May 26 11:50:22.231: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 11:50:22.231: INFO: update-demo-nautilus-m458w is verified up and running May 26 11:50:22.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pzjl7 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:22.326: INFO: stderr: "" May 26 11:50:22.326: INFO: stdout: "true" May 26 11:50:22.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pzjl7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:22.417: INFO: stderr: "" May 26 11:50:22.417: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 11:50:22.417: INFO: validating pod update-demo-nautilus-pzjl7 May 26 11:50:22.420: INFO: got data: { "image": "nautilus.jpg" } May 26 11:50:22.420: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 11:50:22.420: INFO: update-demo-nautilus-pzjl7 is verified up and running STEP: scaling down the replication controller May 26 11:50:22.422: INFO: scanned /root for discovery docs: May 26 11:50:22.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:23.565: INFO: stderr: "" May 26 11:50:23.565: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 26 11:50:23.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:23.658: INFO: stderr: "" May 26 11:50:23.658: INFO: stdout: "update-demo-nautilus-m458w update-demo-nautilus-pzjl7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 26 11:50:28.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:28.757: INFO: stderr: "" May 26 11:50:28.757: INFO: stdout: "update-demo-nautilus-m458w update-demo-nautilus-pzjl7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 26 11:50:33.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:33.863: INFO: stderr: "" May 26 11:50:33.863: INFO: stdout: "update-demo-nautilus-pzjl7 " May 26 11:50:33.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pzjl7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:33.949: INFO: stderr: "" May 26 11:50:33.949: INFO: stdout: "true" May 26 11:50:33.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pzjl7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:34.047: INFO: stderr: "" May 26 11:50:34.047: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 11:50:34.047: INFO: validating pod update-demo-nautilus-pzjl7 May 26 11:50:34.050: INFO: got data: { "image": "nautilus.jpg" } May 26 11:50:34.050: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 11:50:34.050: INFO: update-demo-nautilus-pzjl7 is verified up and running STEP: scaling up the replication controller May 26 11:50:34.051: INFO: scanned /root for discovery docs: May 26 11:50:34.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:35.177: INFO: stderr: "" May 26 11:50:35.177: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 26 11:50:35.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:35.287: INFO: stderr: "" May 26 11:50:35.287: INFO: stdout: "update-demo-nautilus-grv4g update-demo-nautilus-pzjl7 " May 26 11:50:35.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grv4g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:35.385: INFO: stderr: "" May 26 11:50:35.385: INFO: stdout: "" May 26 11:50:35.385: INFO: update-demo-nautilus-grv4g is created but not running May 26 11:50:40.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:40.511: INFO: stderr: "" May 26 11:50:40.511: INFO: stdout: "update-demo-nautilus-grv4g update-demo-nautilus-pzjl7 " May 26 11:50:40.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grv4g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:40.604: INFO: stderr: "" May 26 11:50:40.604: INFO: stdout: "" May 26 11:50:40.604: INFO: update-demo-nautilus-grv4g is created but not running May 26 11:50:45.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:45.698: INFO: stderr: "" May 26 11:50:45.698: INFO: stdout: "update-demo-nautilus-grv4g update-demo-nautilus-pzjl7 " May 26 11:50:45.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grv4g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:45.789: INFO: stderr: "" May 26 11:50:45.789: INFO: stdout: "" May 26 11:50:45.789: INFO: update-demo-nautilus-grv4g is created but not running May 26 11:50:50.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:50.888: INFO: stderr: "" May 26 11:50:50.888: INFO: stdout: "update-demo-nautilus-grv4g update-demo-nautilus-pzjl7 " May 26 11:50:50.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grv4g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:50.981: INFO: stderr: "" May 26 11:50:50.981: INFO: stdout: "true" May 26 11:50:50.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grv4g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:51.082: INFO: stderr: "" May 26 11:50:51.083: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 11:50:51.083: INFO: validating pod update-demo-nautilus-grv4g May 26 11:50:51.086: INFO: got data: { "image": "nautilus.jpg" } May 26 11:50:51.087: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 11:50:51.087: INFO: update-demo-nautilus-grv4g is verified up and running May 26 11:50:51.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pzjl7 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:51.203: INFO: stderr: "" May 26 11:50:51.203: INFO: stdout: "true" May 26 11:50:51.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pzjl7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:51.287: INFO: stderr: "" May 26 11:50:51.287: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 11:50:51.287: INFO: validating pod update-demo-nautilus-pzjl7 May 26 11:50:51.290: INFO: got data: { "image": "nautilus.jpg" } May 26 11:50:51.290: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 11:50:51.290: INFO: update-demo-nautilus-pzjl7 is verified up and running STEP: using delete to clean up resources May 26 11:50:51.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:51.431: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 26 11:50:51.431: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 26 11:50:51.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-hbf4k' May 26 11:50:51.524: INFO: stderr: "No resources found.\n" May 26 11:50:51.524: INFO: stdout: "" May 26 11:50:51.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-hbf4k -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 26 11:50:51.623: INFO: stderr: "" May 26 11:50:51.624: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 11:50:51.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hbf4k" for this suite. 
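The projected downward API test earlier ("should provide container's memory limit") exposes a container resource limit as a file in a projected volume. A hedged sketch of such a pod spec, assuming an illustrative image, file path, and limit value (only the container name `client-container` is taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name seen in the log
    image: docker.io/library/busybox:1.29   # illustrative
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"               # illustrative limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

The container prints the limit in bytes and exits, so the pod passes through `Pending` and `Running` to `Succeeded`, matching the phase progression polled in that test.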
May 26 11:51:13.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:51:13.647: INFO: namespace: e2e-tests-kubectl-hbf4k, resource: bindings, ignored listing per whitelist
May 26 11:51:13.703: INFO: namespace e2e-tests-kubectl-hbf4k deletion completed in 22.075429558s
• [SLOW TEST:67.673 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:51:13.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 26 11:51:13.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
May 26 11:51:13.933: INFO: stderr: ""
May 26 11:51:13.933: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:51:13.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z2znz" for this suite.
May 26 11:51:20.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:51:20.188: INFO: namespace: e2e-tests-kubectl-z2znz, resource: bindings, ignored listing per whitelist
May 26 11:51:20.224: INFO: namespace e2e-tests-kubectl-z2znz deletion completed in 6.267098601s
• [SLOW TEST:6.521 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:51:20.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-jll4r
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-jll4r
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-jll4r
May 26 11:51:20.355: INFO: Found 0 stateful pods, waiting for 1
May 26 11:51:30.358: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
May 26 11:51:40.359: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 26 11:51:40.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 26 11:51:40.901: INFO: stderr: "I0526 11:51:40.476280 1598 log.go:172] (0xc00013cdc0) (0xc000211720) Create stream\nI0526 11:51:40.476337 1598 log.go:172] (0xc00013cdc0) (0xc000211720) Stream added, broadcasting: 1\nI0526 11:51:40.480679 1598 log.go:172] (0xc00013cdc0) Reply frame received for 1\nI0526 11:51:40.480723 1598 log.go:172] (0xc00013cdc0) (0xc0004fa6e0) Create stream\nI0526 11:51:40.480743 1598 log.go:172] (0xc00013cdc0) (0xc0004fa6e0) Stream added, broadcasting: 3\nI0526 11:51:40.481722 1598 log.go:172] (0xc00013cdc0) Reply frame received for 3\nI0526 11:51:40.481767 1598 log.go:172] (0xc00013cdc0) (0xc0005de000) Create stream\nI0526 11:51:40.481784 1598 log.go:172] (0xc00013cdc0) (0xc0005de000) Stream added, broadcasting: 5\nI0526 11:51:40.482518 1598 log.go:172] (0xc00013cdc0) Reply frame received for 5\nI0526 11:51:40.894736 1598 log.go:172] (0xc00013cdc0) Data frame received for 3\nI0526 11:51:40.894764 1598 log.go:172] (0xc0004fa6e0) (3) Data frame handling\nI0526 11:51:40.894928 1598 log.go:172] (0xc0004fa6e0) (3) Data frame sent\nI0526 11:51:40.894938 1598 log.go:172] (0xc00013cdc0) Data frame received for 3\nI0526 11:51:40.894945 1598 log.go:172] (0xc0004fa6e0) (3) Data frame handling\nI0526 11:51:40.895377 1598 log.go:172] (0xc00013cdc0) Data frame received for 5\nI0526 11:51:40.895386 1598 log.go:172] (0xc0005de000) (5) Data frame handling\nI0526 11:51:40.896455 1598 log.go:172] (0xc00013cdc0) Data frame received for 1\nI0526 11:51:40.896464 1598 log.go:172] (0xc000211720) (1) Data frame handling\nI0526 11:51:40.896472 1598 log.go:172] (0xc000211720) (1) Data frame sent\nI0526 11:51:40.896480 1598 log.go:172] (0xc00013cdc0) (0xc000211720) Stream removed, broadcasting: 1\nI0526 11:51:40.896599 1598 log.go:172] (0xc00013cdc0) (0xc000211720) Stream removed, broadcasting: 1\nI0526 11:51:40.896610 1598 log.go:172] (0xc00013cdc0) (0xc0004fa6e0) Stream removed, broadcasting: 3\nI0526 11:51:40.896720 1598 log.go:172] (0xc00013cdc0) (0xc0005de000) Stream removed, broadcasting: 5\nI0526 11:51:40.896745 1598 log.go:172] (0xc00013cdc0) Go away received\n"
May 26 11:51:40.901: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 26 11:51:40.901: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 26 11:51:40.904: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 26 11:51:50.907: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 26 11:51:50.907: INFO: Waiting for statefulset status.replicas updated to 0
May 26 11:51:50.919: INFO: POD NODE PHASE GRACE CONDITIONS
May 26 11:51:50.919: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC }]
May 26 11:51:50.919: INFO:
May 26 11:51:50.919: INFO: StatefulSet ss has not reached scale 3, at 1
May 26 11:51:51.923: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993670266s
May 26 11:51:52.928: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989695737s
May 26 11:51:53.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985133323s
May 26 11:51:54.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980888209s
May 26 11:51:55.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.976825157s
May 26 11:51:56.944: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.971630179s
May 26 11:51:57.948: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.968461422s
May 26 11:51:59.071: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.964732092s
May 26 11:52:00.076: INFO: Verifying statefulset ss doesn't scale past 3 for another 841.4795ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-jll4r
May 26 11:52:01.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 26 11:52:01.553: INFO: stderr: "I0526 11:52:01.218782 1619 log.go:172] (0xc000138840) (0xc00076e640) Create stream\nI0526 11:52:01.218839 1619 log.go:172] (0xc000138840) (0xc00076e640) Stream added, broadcasting: 1\nI0526 11:52:01.223335 1619 log.go:172] (0xc000138840) Reply frame received for 1\nI0526 11:52:01.223375 1619 log.go:172] (0xc000138840) (0xc00076e6e0) Create stream\nI0526 11:52:01.223389 1619 log.go:172] (0xc000138840) (0xc00076e6e0) Stream added, broadcasting: 3\nI0526 11:52:01.226107 1619 log.go:172] (0xc000138840) Reply frame received for 3\nI0526 11:52:01.226143 1619 log.go:172] (0xc000138840) (0xc000706e60) Create stream\nI0526 11:52:01.226156 1619 log.go:172] (0xc000138840) (0xc000706e60) Stream added, broadcasting: 5\nI0526 11:52:01.226619 1619 log.go:172] (0xc000138840) Reply frame received for 5\nI0526 11:52:01.545927 1619 log.go:172] (0xc000138840) Data frame received for 3\nI0526 11:52:01.546090 1619 log.go:172] (0xc00076e6e0) (3) Data frame handling\nI0526 11:52:01.546114 1619 log.go:172] (0xc00076e6e0) (3) Data frame sent\nI0526 11:52:01.546131 1619 log.go:172] (0xc000138840) Data frame received for 3\nI0526 11:52:01.546167 1619 log.go:172] (0xc00076e6e0) (3) Data frame handling\nI0526 11:52:01.546192 1619 log.go:172] (0xc000138840) Data frame received for 5\nI0526 11:52:01.546203 1619 log.go:172] (0xc000706e60) (5) Data frame handling\nI0526 11:52:01.547307 1619 log.go:172] (0xc000138840) Data frame received for 1\nI0526 11:52:01.547330 1619 log.go:172] (0xc00076e640) (1) Data frame handling\nI0526 11:52:01.547348 1619 log.go:172] (0xc00076e640) (1) Data frame sent\nI0526 11:52:01.547505 1619 log.go:172] (0xc000138840) (0xc00076e640) Stream removed, broadcasting: 1\nI0526 11:52:01.547589 1619 log.go:172] (0xc000138840) Go away received\nI0526 11:52:01.547737 1619 log.go:172] (0xc000138840) (0xc00076e640) Stream removed, broadcasting: 1\nI0526 11:52:01.547756 1619 log.go:172] (0xc000138840) (0xc00076e6e0) Stream removed, broadcasting: 3\nI0526 11:52:01.547777 1619 log.go:172] (0xc000138840) (0xc000706e60)
Stream removed, broadcasting: 5\n"
May 26 11:52:01.553: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 26 11:52:01.553: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 26 11:52:01.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 26 11:52:02.050: INFO: stderr: "I0526 11:52:01.663693 1642 log.go:172] (0xc00079a160) (0xc000694640) Create stream\nI0526 11:52:01.663733 1642 log.go:172] (0xc00079a160) (0xc000694640) Stream added, broadcasting: 1\nI0526 11:52:01.665916 1642 log.go:172] (0xc00079a160) Reply frame received for 1\nI0526 11:52:01.665955 1642 log.go:172] (0xc00079a160) (0xc000534e60) Create stream\nI0526 11:52:01.665970 1642 log.go:172] (0xc00079a160) (0xc000534e60) Stream added, broadcasting: 3\nI0526 11:52:01.666847 1642 log.go:172] (0xc00079a160) Reply frame received for 3\nI0526 11:52:01.666884 1642 log.go:172] (0xc00079a160) (0xc0003a4000) Create stream\nI0526 11:52:01.666901 1642 log.go:172] (0xc00079a160) (0xc0003a4000) Stream added, broadcasting: 5\nI0526 11:52:01.667630 1642 log.go:172] (0xc00079a160) Reply frame received for 5\nI0526 11:52:02.045904 1642 log.go:172] (0xc00079a160) Data frame received for 3\nI0526 11:52:02.045934 1642 log.go:172] (0xc000534e60) (3) Data frame handling\nI0526 11:52:02.045949 1642 log.go:172] (0xc000534e60) (3) Data frame sent\nI0526 11:52:02.045968 1642 log.go:172] (0xc00079a160) Data frame received for 3\nI0526 11:52:02.045980 1642 log.go:172] (0xc000534e60) (3) Data frame handling\nI0526 11:52:02.046003 1642 log.go:172] (0xc00079a160) Data frame received for 5\nI0526 11:52:02.046018 1642 log.go:172] (0xc0003a4000) (5) Data frame handling\nI0526 11:52:02.046036 1642 log.go:172] (0xc0003a4000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0526 11:52:02.046049 1642 log.go:172] (0xc00079a160) Data frame received for 5\nI0526 11:52:02.046059 1642 log.go:172] (0xc0003a4000) (5) Data frame handling\nI0526 11:52:02.048228 1642 log.go:172] (0xc00079a160) Data frame received for 1\nI0526 11:52:02.048241 1642 log.go:172] (0xc000694640) (1) Data frame handling\nI0526 11:52:02.048250 1642 log.go:172] (0xc000694640) (1) Data frame sent\nI0526 11:52:02.048266 1642 log.go:172] (0xc00079a160) (0xc000694640) Stream removed, broadcasting: 1\nI0526 11:52:02.048288 1642 log.go:172] (0xc00079a160) Go away received\nI0526 11:52:02.048431 1642 log.go:172] (0xc00079a160) (0xc000694640) Stream removed, broadcasting: 1\nI0526 11:52:02.048442 1642 log.go:172] (0xc00079a160) (0xc000534e60) Stream removed, broadcasting: 3\nI0526 11:52:02.048449 1642 log.go:172] (0xc00079a160) (0xc0003a4000) Stream removed, broadcasting: 5\n"
May 26 11:52:02.050: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 26 11:52:02.050: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 26 11:52:02.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 26 11:52:02.180: INFO: rc: 1
May 26 11:52:02.181: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00212f650 exit status 1 true [0xc001186888 0xc0011868a8 0xc0011868c0] [0xc001186888 0xc0011868a8 0xc0011868c0] [0xc0011868a0 0xc0011868b8] [0x935700 0x935700] 0xc0020c3b00 }:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("nginx")
error: exit status 1
May 26 11:52:12.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 26 11:52:12.727: INFO: stderr: "I0526 11:52:12.295964 1687 log.go:172] (0xc000732160) (0xc00058e6e0) Create stream\nI0526 11:52:12.295996 1687 log.go:172] (0xc000732160) (0xc00058e6e0) Stream added, broadcasting: 1\nI0526 11:52:12.297970 1687 log.go:172] (0xc000732160) Reply frame received for 1\nI0526 11:52:12.297999 1687 log.go:172] (0xc000732160) (0xc0007b0be0) Create stream\nI0526 11:52:12.298008 1687 log.go:172] (0xc000732160) (0xc0007b0be0) Stream added, broadcasting: 3\nI0526 11:52:12.298736 1687 log.go:172] (0xc000732160) Reply frame received for 3\nI0526 11:52:12.298767 1687 log.go:172] (0xc000732160) (0xc000116000) Create stream\nI0526 11:52:12.298780 1687 log.go:172] (0xc000732160) (0xc000116000) Stream added, broadcasting: 5\nI0526 11:52:12.299361 1687 log.go:172] (0xc000732160) Reply frame received for 5\nI0526 11:52:12.720051 1687 log.go:172] (0xc000732160) Data frame received for 3\nI0526 11:52:12.720102 1687 log.go:172] (0xc0007b0be0) (3) Data frame handling\nI0526 11:52:12.720140 1687 log.go:172] (0xc0007b0be0) (3) Data frame sent\nI0526 11:52:12.720157 1687 log.go:172] (0xc000732160) Data frame received for 3\nI0526 11:52:12.720167 1687 log.go:172] (0xc0007b0be0) (3) Data frame handling\nI0526 11:52:12.720320 1687 log.go:172] (0xc000732160) Data frame received for 5\nI0526 11:52:12.720340 1687 log.go:172] (0xc000116000) (5) Data frame handling\nI0526 11:52:12.720354 1687 log.go:172] (0xc000116000) (5) Data frame sent\nI0526 11:52:12.720371 1687 log.go:172] (0xc000732160) Data frame received for 5\nI0526 11:52:12.720388 1687 log.go:172] (0xc000116000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0526 11:52:12.722376 1687 log.go:172] (0xc000732160) Data frame received for 1\nI0526 11:52:12.722414 1687 log.go:172] (0xc00058e6e0) (1) Data frame handling\nI0526 11:52:12.722435 1687 log.go:172] (0xc00058e6e0) (1) Data frame sent\nI0526 11:52:12.722454 1687 log.go:172] (0xc000732160) (0xc00058e6e0) Stream removed, broadcasting: 1\nI0526 11:52:12.722471 1687 log.go:172] (0xc000732160) Go away received\nI0526 11:52:12.722708 1687 log.go:172] (0xc000732160) (0xc00058e6e0) Stream removed, broadcasting: 1\nI0526 11:52:12.722727 1687 log.go:172] (0xc000732160) (0xc0007b0be0) Stream removed, broadcasting: 3\nI0526 11:52:12.722738 1687 log.go:172] (0xc000732160) (0xc000116000) Stream removed, broadcasting: 5\n"
May 26 11:52:12.728: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 26 11:52:12.728: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 26 11:52:12.731: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 26 11:52:12.731: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 26 11:52:12.731: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
May 26 11:52:12.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 26 11:52:13.155: INFO: stderr: "I0526 11:52:12.841429 1709 log.go:172] (0xc000154630) (0xc000708640) Create stream\nI0526 11:52:12.841469 1709 log.go:172] (0xc000154630) (0xc000708640) Stream added, broadcasting: 1\nI0526 11:52:12.843123 1709 log.go:172] (0xc000154630) Reply frame received for 1\nI0526 11:52:12.843159 1709 log.go:172] (0xc000154630) (0xc0005c4d20) Create stream\nI0526 11:52:12.843171 1709 log.go:172] (0xc000154630) (0xc0005c4d20) Stream added, broadcasting: 3\nI0526 11:52:12.844001 1709 log.go:172] (0xc000154630) Reply frame received for 3\nI0526 11:52:12.844029 1709 log.go:172] (0xc000154630) (0xc0006ee000) Create stream\nI0526 11:52:12.844037 1709 log.go:172] (0xc000154630) (0xc0006ee000) Stream added, broadcasting: 5\nI0526 11:52:12.844706 1709 log.go:172] (0xc000154630) Reply frame received for 5\nI0526 11:52:13.149963 1709 log.go:172] (0xc000154630) Data frame received for 5\nI0526 11:52:13.149984 1709 log.go:172] (0xc0006ee000) (5) Data frame handling\nI0526 11:52:13.149996 1709 log.go:172] (0xc000154630) Data frame received for 3\nI0526 11:52:13.150000 1709 log.go:172] (0xc0005c4d20) (3) Data frame handling\nI0526 11:52:13.150005 1709 log.go:172] (0xc0005c4d20) (3) Data frame sent\nI0526 11:52:13.150008 1709 log.go:172] (0xc000154630) Data frame received for 3\nI0526 11:52:13.150012 1709 log.go:172] (0xc0005c4d20) (3) Data frame handling\nI0526 11:52:13.151158 1709 log.go:172] (0xc000154630) Data frame received for 1\nI0526 11:52:13.151168 1709 log.go:172] (0xc000708640) (1) Data frame handling\nI0526 11:52:13.151176 1709 log.go:172] (0xc000708640) (1) Data frame sent\nI0526 11:52:13.151184 1709 log.go:172] (0xc000154630) (0xc000708640) Stream removed, broadcasting: 1\nI0526 11:52:13.151292 1709 log.go:172] (0xc000154630) (0xc000708640) Stream removed, broadcasting: 1\nI0526 11:52:13.151304 1709 log.go:172] (0xc000154630) (0xc0005c4d20) Stream removed, broadcasting: 3\nI0526 11:52:13.151312 1709 log.go:172] (0xc000154630) (0xc0006ee000) Stream removed, broadcasting: 5\n"
May 26 11:52:13.155: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 26 11:52:13.155: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 26 11:52:13.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 26 11:52:13.648: INFO: stderr: "I0526 11:52:13.268173 1731 log.go:172] (0xc00013a630) (0xc0006f4640) Create stream\nI0526 11:52:13.268211 1731 log.go:172] (0xc00013a630) (0xc0006f4640) Stream added, broadcasting: 1\nI0526 11:52:13.270255 1731 log.go:172] (0xc00013a630) Reply frame received for 1\nI0526 11:52:13.270289 1731 log.go:172] (0xc00013a630) (0xc0006f46e0) Create stream\nI0526 11:52:13.270300 1731 log.go:172] (0xc00013a630) (0xc0006f46e0) Stream added, broadcasting: 3\nI0526 11:52:13.271032 1731 log.go:172] (0xc00013a630) Reply frame received for 3\nI0526 11:52:13.271058 1731 log.go:172] (0xc00013a630) (0xc00052ec80) Create stream\nI0526 11:52:13.271068 1731 log.go:172] (0xc00013a630) (0xc00052ec80) Stream added, broadcasting: 5\nI0526 11:52:13.271901 1731 log.go:172] (0xc00013a630) Reply frame received for 5\nI0526 11:52:13.643534 1731 log.go:172] (0xc00013a630) Data frame received for 3\nI0526 11:52:13.643568 1731 log.go:172] (0xc0006f46e0) (3) Data frame handling\nI0526 11:52:13.643589 1731 log.go:172] (0xc0006f46e0) (3) Data frame sent\nI0526 11:52:13.643599 1731 log.go:172] (0xc00013a630) Data frame received for 3\nI0526 11:52:13.643606 1731 log.go:172] (0xc0006f46e0) (3) Data frame handling\nI0526 11:52:13.643628 1731 log.go:172] (0xc00013a630) Data frame received for 5\nI0526 11:52:13.643654 1731 log.go:172] (0xc00052ec80) (5) Data frame handling\nI0526 11:52:13.644895 1731 log.go:172] (0xc00013a630) Data frame received for 1\nI0526 11:52:13.644922 1731 log.go:172] (0xc0006f4640) (1) Data frame handling\nI0526 11:52:13.644943 1731 log.go:172] (0xc0006f4640) (1) Data frame sent\nI0526 11:52:13.644967 1731 log.go:172] (0xc00013a630) (0xc0006f4640) Stream removed, broadcasting: 1\nI0526 11:52:13.645069 1731 log.go:172] (0xc00013a630) Go away received\nI0526 11:52:13.645274 1731 log.go:172] (0xc00013a630) (0xc0006f4640) Stream removed, broadcasting: 1\nI0526 11:52:13.645302 1731 log.go:172] (0xc00013a630) (0xc0006f46e0) Stream removed, broadcasting: 3\nI0526 11:52:13.645321 1731 log.go:172] (0xc00013a630) (0xc00052ec80) Stream removed, broadcasting: 5\n"
May 26 11:52:13.648: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 26 11:52:13.648: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 26 11:52:13.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 26 11:52:14.147: INFO: stderr: "I0526 11:52:13.767507 1753 log.go:172] (0xc000138840) (0xc0007e75e0) Create stream\nI0526 11:52:13.767660 1753 log.go:172] (0xc000138840) (0xc0007e75e0) Stream added, broadcasting: 1\nI0526 11:52:13.773805 1753 log.go:172] (0xc000138840) Reply frame received for 1\nI0526 11:52:13.773899 1753 log.go:172] (0xc000138840) (0xc0007e6fa0) Create stream\nI0526 11:52:13.773915 1753 log.go:172] (0xc000138840) (0xc0007e6fa0) Stream added, broadcasting: 3\nI0526 11:52:13.774873 1753 log.go:172] (0xc000138840) Reply frame received for 3\nI0526 11:52:13.774916 1753 log.go:172] (0xc000138840) (0xc000210780) Create stream\nI0526 11:52:13.774940 1753 log.go:172] (0xc000138840) (0xc000210780) Stream added, broadcasting: 5\nI0526 11:52:13.775632 1753 log.go:172] (0xc000138840) Reply frame received for 5\nI0526 11:52:14.141524 1753 log.go:172] (0xc000138840) Data frame received for 3\nI0526 11:52:14.141551 1753 log.go:172] (0xc0007e6fa0) (3) Data frame handling\nI0526 11:52:14.141562 1753 log.go:172] (0xc0007e6fa0) (3) Data frame sent\nI0526 11:52:14.141569 1753 log.go:172] (0xc000138840) Data frame received for 3\nI0526 11:52:14.141575 1753 log.go:172] (0xc0007e6fa0) (3) Data frame handling\nI0526 11:52:14.141670 1753 log.go:172] (0xc000138840) Data frame received for 5\nI0526 11:52:14.141701 1753 log.go:172] (0xc000210780) (5) Data frame handling\nI0526 11:52:14.142660 1753 log.go:172]
(0xc000138840) Data frame received for 1\nI0526 11:52:14.142677 1753 log.go:172] (0xc0007e75e0) (1) Data frame handling\nI0526 11:52:14.142686 1753 log.go:172] (0xc0007e75e0) (1) Data frame sent\nI0526 11:52:14.142878 1753 log.go:172] (0xc000138840) (0xc0007e75e0) Stream removed, broadcasting: 1\nI0526 11:52:14.142892 1753 log.go:172] (0xc000138840) Go away received\nI0526 11:52:14.143116 1753 log.go:172] (0xc000138840) (0xc0007e75e0) Stream removed, broadcasting: 1\nI0526 11:52:14.143146 1753 log.go:172] (0xc000138840) (0xc0007e6fa0) Stream removed, broadcasting: 3\nI0526 11:52:14.143168 1753 log.go:172] (0xc000138840) (0xc000210780) Stream removed, broadcasting: 5\n"
May 26 11:52:14.147: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 26 11:52:14.147: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 26 11:52:14.147: INFO: Waiting for statefulset status.replicas updated to 0
May 26 11:52:14.150: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
May 26 11:52:24.158: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 26 11:52:24.158: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 26 11:52:24.158: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 26 11:52:24.172: INFO: POD NODE PHASE GRACE CONDITIONS
May 26 11:52:24.172: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC }]
May 26 11:52:24.172: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:24.172: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:24.172: INFO:
May 26 11:52:24.172: INFO: StatefulSet ss has not reached scale 0, at 3
May 26 11:52:25.176: INFO: POD NODE PHASE GRACE CONDITIONS
May 26 11:52:25.176: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC }]
May 26 11:52:25.176: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:25.176: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:25.176: INFO:
May 26 11:52:25.176: INFO: StatefulSet ss has not reached scale 0, at 3
May 26 11:52:26.180: INFO: POD NODE PHASE GRACE CONDITIONS
May 26 11:52:26.180: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC }]
May 26 11:52:26.180: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:26.180: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:26.180: INFO:
May 26 11:52:26.180: INFO: StatefulSet ss has not reached scale 0, at 3
May 26 11:52:27.184: INFO: POD NODE PHASE GRACE CONDITIONS
May 26 11:52:27.184: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC }]
May 26 11:52:27.184: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:27.184: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:27.185: INFO:
May 26 11:52:27.185: INFO: StatefulSet ss has not reached scale 0, at 3
May 26 11:52:28.305: INFO: POD NODE PHASE GRACE CONDITIONS
May 26 11:52:28.305: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC }]
May 26 11:52:28.305: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:28.305: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:28.305: INFO:
May 26 11:52:28.305: INFO: StatefulSet ss has not reached scale 0, at 3
May 26 11:52:29.309: INFO: POD NODE PHASE GRACE CONDITIONS
May 26 11:52:29.309: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC }]
May 26 11:52:29.309: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:29.309: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:29.309: INFO:
May 26 11:52:29.309: INFO: StatefulSet ss has not reached scale 0, at 3
May 26 11:52:30.313: INFO: POD NODE PHASE GRACE CONDITIONS
May 26 11:52:30.313: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC }]
May 26 11:52:30.314: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:30.314: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:30.314: INFO:
May 26 11:52:30.314: INFO: StatefulSet ss has not reached scale 0, at 3
May 26 11:52:31.329: INFO: POD NODE PHASE GRACE CONDITIONS
May 26 11:52:31.329: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC }]
May 26 11:52:31.329: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:31.329: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:50 +0000 UTC }]
May 26 11:52:31.329: INFO:
May 26 11:52:31.329: INFO: StatefulSet ss has not reached scale 0, at 3
May 26 11:52:32.334: INFO: POD NODE PHASE GRACE CONDITIONS
May 26 11:52:32.334: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC }]
May 26 11:52:32.334: INFO:
May 26 11:52:32.334: INFO: StatefulSet ss has not reached scale 0, at 1
May 26 11:52:33.338: INFO: POD NODE PHASE GRACE CONDITIONS
May 26 11:52:33.338: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:52:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 11:51:20 +0000 UTC }]
May 26 11:52:33.338: INFO:
May 26 11:52:33.338: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful
set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-jll4r May 26 11:52:34.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:52:34.462: INFO: rc: 1 May 26 11:52:34.462: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00170eea0 exit status 1 true [0xc00040ef48 0xc00040ef68 0xc00040ef90] [0xc00040ef48 0xc00040ef68 0xc00040ef90] [0xc00040ef58 0xc00040ef80] [0x935700 0x935700] 0xc001885980 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 May 26 11:52:44.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:52:44.566: INFO: rc: 1 May 26 11:52:44.566: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b67710 exit status 1 true [0xc0003814c8 0xc0003815e0 0xc000381638] [0xc0003814c8 0xc0003815e0 0xc000381638] [0xc0003815c8 0xc000381618] [0x935700 0x935700] 0xc001e9e960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:52:54.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html
/usr/share/nginx/html/ || true' May 26 11:52:54.651: INFO: rc: 1 May 26 11:52:54.651: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00170efc0 exit status 1 true [0xc00040ef98 0xc00040efd0 0xc00040f000] [0xc00040ef98 0xc00040efd0 0xc00040f000] [0xc00040efb8 0xc00040eff0] [0x935700 0x935700] 0xc001885c20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:53:04.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:53:04.739: INFO: rc: 1 May 26 11:53:04.739: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00002b3b0 exit status 1 true [0xc000bf4008 0xc000bf4020 0xc000bf4038] [0xc000bf4008 0xc000bf4020 0xc000bf4038] [0xc000bf4018 0xc000bf4030] [0x935700 0x935700] 0xc00103a1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:53:14.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:53:14.828: INFO: rc: 1 May 26 11:53:14.828: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00002b4d0 exit status 1 true [0xc000bf4040 0xc000bf4058 0xc000bf4070] [0xc000bf4040 0xc000bf4058 0xc000bf4070] [0xc000bf4050 0xc000bf4068] [0x935700 0x935700] 0xc00103a480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:53:24.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:53:24.912: INFO: rc: 1 May 26 11:53:24.912: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001dee960 exit status 1 true [0xc000100690 0xc0001006d8 0xc000100750] [0xc000100690 0xc0001006d8 0xc000100750] [0xc0001006a8 0xc000100748] [0x935700 0x935700] 0xc00211fc20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:53:34.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:53:34.993: INFO: rc: 1 May 26 11:53:34.993: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00002b5f0 exit status 1 true [0xc000bf4078 0xc000bf4090 0xc000bf40b0] [0xc000bf4078 0xc000bf4090 0xc000bf40b0] [0xc000bf4088 0xc000bf40a8] [0x935700 0x935700] 0xc00103a7e0 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-0" not found error: exit status 1 May 26 11:53:44.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:53:45.066: INFO: rc: 1 May 26 11:53:45.066: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00002b740 exit status 1 true [0xc000bf40b8 0xc000bf40d0 0xc000bf40e8] [0xc000bf40b8 0xc000bf40d0 0xc000bf40e8] [0xc000bf40c8 0xc000bf40e0] [0x935700 0x935700] 0xc00103bbc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:53:55.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:53:55.167: INFO: rc: 1 May 26 11:53:55.167: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00002b890 exit status 1 true [0xc000bf40f0 0xc000bf4108 0xc000bf4128] [0xc000bf40f0 0xc000bf4108 0xc000bf4128] [0xc000bf4100 0xc000bf4120] [0x935700 0x935700] 0xc000d426c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:54:05.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:54:05.263: INFO: rc: 1 May 26 11:54:05.263: 
INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0003b0fc0 exit status 1 true [0xc0017f0008 0xc0017f0020 0xc0017f0038] [0xc0017f0008 0xc0017f0020 0xc0017f0038] [0xc0017f0018 0xc0017f0030] [0x935700 0x935700] 0xc0021341e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:54:15.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:54:15.346: INFO: rc: 1 May 26 11:54:15.346: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001db6120 exit status 1 true [0xc00000e010 0xc000380c40 0xc000380d28] [0xc00000e010 0xc000380c40 0xc000380d28] [0xc000380be0 0xc000380ce0] [0x935700 0x935700] 0xc00103a1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:54:25.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:54:25.435: INFO: rc: 1 May 26 11:54:25.435: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001db6270 exit 
status 1 true [0xc000380d60 0xc000380e88 0xc0003810a0] [0xc000380d60 0xc000380e88 0xc0003810a0] [0xc000380dd0 0xc000381008] [0x935700 0x935700] 0xc00103a480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:54:35.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:54:35.525: INFO: rc: 1 May 26 11:54:35.525: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001db63c0 exit status 1 true [0xc0003810c0 0xc000381250 0xc0003813d0] [0xc0003810c0 0xc000381250 0xc0003813d0] [0xc0003811e8 0xc000381388] [0x935700 0x935700] 0xc00103a7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:54:45.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:54:45.604: INFO: rc: 1 May 26 11:54:45.604: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001db6510 exit status 1 true [0xc000381460 0xc000381518 0xc000381610] [0xc000381460 0xc000381518 0xc000381610] [0xc0003814c8 0xc0003815e0] [0x935700 0x935700] 0xc00103bbc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:54:55.605: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:54:55.700: INFO: rc: 1 May 26 11:54:55.700: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f64120 exit status 1 true [0xc0017f0040 0xc0017f0058 0xc0017f0070] [0xc0017f0040 0xc0017f0058 0xc0017f0070] [0xc0017f0050 0xc0017f0068] [0x935700 0x935700] 0xc001ff01e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:55:05.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:55:05.796: INFO: rc: 1 May 26 11:55:05.796: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b66120 exit status 1 true [0xc00040e170 0xc00040e328 0xc00040e3f8] [0xc00040e170 0xc00040e328 0xc00040e3f8] [0xc00040e1b8 0xc00040e3d0] [0x935700 0x935700] 0xc001e9e1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:55:15.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:55:15.882: INFO: rc: 1 May 26 11:55:15.882: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001db6720 exit status 1 true [0xc000381618 0xc0003816e0 0xc0003817a8] [0xc000381618 0xc0003816e0 0xc0003817a8] [0xc0003816d8 0xc000381750] [0x935700 0x935700] 0xc002134c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:55:25.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:55:25.974: INFO: rc: 1 May 26 11:55:25.974: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b662a0 exit status 1 true [0xc00040e420 0xc00040e540 0xc00040e5a8] [0xc00040e420 0xc00040e540 0xc00040e5a8] [0xc00040e500 0xc00040e580] [0x935700 0x935700] 0xc001e9e480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:55:35.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:55:36.063: INFO: rc: 1 May 26 11:55:36.063: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001db6870 exit status 1 true [0xc0003817f0 0xc0003818c8 0xc000381978] [0xc0003817f0 0xc0003818c8 
0xc000381978] [0xc000381890 0xc000381940] [0x935700 0x935700] 0xc002135020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:55:46.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:55:46.148: INFO: rc: 1 May 26 11:55:46.149: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001db6990 exit status 1 true [0xc0003819b0 0xc000381a48 0xc000381ad8] [0xc0003819b0 0xc000381a48 0xc000381ad8] [0xc000381a10 0xc000381ac0] [0x935700 0x935700] 0xc002135500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:55:56.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:55:56.227: INFO: rc: 1 May 26 11:55:56.227: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001db6c30 exit status 1 true [0xc000381af0 0xc000381bb8 0xc000381c08] [0xc000381af0 0xc000381bb8 0xc000381c08] [0xc000381b98 0xc000381be0] [0x935700 0x935700] 0xc0018852c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:56:06.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:56:06.318: INFO: rc: 1 May 26 11:56:06.318: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001db6e40 exit status 1 true [0xc000381c18 0xc000381c70 0xc000381cc0] [0xc000381c18 0xc000381c70 0xc000381cc0] [0xc000381c38 0xc000381cb0] [0x935700 0x935700] 0xc001885620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:56:16.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:56:16.401: INFO: rc: 1 May 26 11:56:16.401: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0003b1020 exit status 1 true [0xc00000e010 0xc00040e1b8 0xc00040e3d0] [0xc00000e010 0xc00040e1b8 0xc00040e3d0] [0xc00040e178 0xc00040e368] [0x935700 0x935700] 0xc00103a1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:56:26.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:56:26.511: INFO: rc: 1 May 26 11:56:26.511: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0003b1140 exit status 1 true [0xc00040e3f8 0xc00040e500 0xc00040e580] [0xc00040e3f8 0xc00040e500 0xc00040e580] [0xc00040e478 0xc00040e560] [0x935700 0x935700] 0xc00103a480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:56:36.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:56:36.583: INFO: rc: 1 May 26 11:56:36.583: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b66150 exit status 1 true [0xc000380bc0 0xc000380c60 0xc000380d60] [0xc000380bc0 0xc000380c60 0xc000380d60] [0xc000380c40 0xc000380d28] [0x935700 0x935700] 0xc0021341e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:56:46.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:56:46.680: INFO: rc: 1 May 26 11:56:46.680: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001db6150 exit status 1 true [0xc0017f0000 0xc0017f0018 0xc0017f0030] [0xc0017f0000 0xc0017f0018 0xc0017f0030] [0xc0017f0010 0xc0017f0028] [0x935700 0x935700] 0xc001e9e1e0 }: Command stdout: stderr: Error from 
server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:56:56.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:56:56.761: INFO: rc: 1 May 26 11:56:56.761: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0003b1290 exit status 1 true [0xc00040e5a8 0xc00040e638 0xc00040e6b0] [0xc00040e5a8 0xc00040e638 0xc00040e6b0] [0xc00040e610 0xc00040e680] [0x935700 0x935700] 0xc00103a7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:57:06.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:57:06.850: INFO: rc: 1 May 26 11:57:06.850: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f64150 exit status 1 true [0xc000bf4008 0xc000bf4020 0xc000bf4038] [0xc000bf4008 0xc000bf4020 0xc000bf4038] [0xc000bf4018 0xc000bf4030] [0x935700 0x935700] 0xc001885920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:57:16.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:57:16.924: INFO: rc: 1 May 26 
11:57:16.924: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001db62a0 exit status 1 true [0xc0017f0038 0xc0017f0050 0xc0017f0068] [0xc0017f0038 0xc0017f0050 0xc0017f0068] [0xc0017f0048 0xc0017f0060] [0x935700 0x935700] 0xc001e9e480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:57:26.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:57:27.010: INFO: rc: 1 May 26 11:57:27.010: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0003b1410 exit status 1 true [0xc00040e6b8 0xc00040e7d8 0xc00040ece0] [0xc00040e6b8 0xc00040e7d8 0xc00040ece0] [0xc00040e7a8 0xc00040ecd8] [0x935700 0x935700] 0xc00103bbc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 26 11:57:37.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jll4r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 11:57:37.091: INFO: rc: 1 May 26 11:57:37.091: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: May 26 11:57:37.091: INFO: Scaling statefulset ss to 0 May 26 11:57:37.098: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 26 11:57:37.100: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jll4r
May 26 11:57:37.102: INFO: Scaling statefulset ss to 0
May 26 11:57:37.109: INFO: Waiting for statefulset status.replicas updated to 0
May 26 11:57:37.111: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:57:37.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-jll4r" for this suite.
May 26 11:57:45.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:57:45.430: INFO: namespace: e2e-tests-statefulset-jll4r, resource: bindings, ignored listing per whitelist
May 26 11:57:45.445: INFO: namespace e2e-tests-statefulset-jll4r deletion completed in 8.277171952s
• [SLOW TEST:385.221 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:57:45.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 26 11:57:45.556: INFO: Waiting up to 5m0s for pod "downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-mn45v" to be "success or failure"
May 26 11:57:45.591: INFO: Pod "downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 34.955521ms
May 26 11:57:47.594: INFO: Pod "downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038245659s
May 26 11:57:49.598: INFO: Pod "downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041605714s
May 26 11:57:51.602: INFO: Pod "downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045670869s
May 26 11:57:53.606: INFO: Pod "downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050173425s
May 26 11:57:55.610: INFO: Pod "downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054211039s
May 26 11:57:57.613: INFO: Pod "downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.057205379s
May 26 11:57:59.627: INFO: Pod "downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.070679368s
STEP: Saw pod success
May 26 11:57:59.627: INFO: Pod "downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 11:57:59.629: INFO: Trying to get logs from node hunter-worker2 pod downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018 container dapi-container:
STEP: delete the pod
May 26 11:57:59.649: INFO: Waiting for pod downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018 to disappear
May 26 11:57:59.653: INFO: Pod downward-api-1ca8e7da-9f48-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:57:59.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mn45v" for this suite.
May 26 11:58:05.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:58:05.702: INFO: namespace: e2e-tests-downward-api-mn45v, resource: bindings, ignored listing per whitelist
May 26 11:58:05.745: INFO: namespace e2e-tests-downward-api-mn45v deletion completed in 6.089091951s
• [SLOW TEST:20.299 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:58:05.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-28be9f55-9f48-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume configMaps
May 26 11:58:05.890: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-28c77f9a-9f48-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-fxr5q" to be "success or failure"
May 26 11:58:05.892: INFO: Pod "pod-projected-configmaps-28c77f9a-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.61955ms
May 26 11:58:07.900: INFO: Pod "pod-projected-configmaps-28c77f9a-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009868629s
May 26 11:58:09.903: INFO: Pod "pod-projected-configmaps-28c77f9a-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01339364s
May 26 11:58:11.906: INFO: Pod "pod-projected-configmaps-28c77f9a-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016856501s
May 26 11:58:14.026: INFO: Pod "pod-projected-configmaps-28c77f9a-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136213714s
May 26 11:58:16.030: INFO: Pod "pod-projected-configmaps-28c77f9a-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.139942971s
May 26 11:58:18.033: INFO: Pod "pod-projected-configmaps-28c77f9a-9f48-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.143390766s
STEP: Saw pod success
May 26 11:58:18.033: INFO: Pod "pod-projected-configmaps-28c77f9a-9f48-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 11:58:18.035: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-28c77f9a-9f48-11ea-b1d1-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
May 26 11:58:18.067: INFO: Waiting for pod pod-projected-configmaps-28c77f9a-9f48-11ea-b1d1-0242ac110018 to disappear
May 26 11:58:18.083: INFO: Pod pod-projected-configmaps-28c77f9a-9f48-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:58:18.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fxr5q" for this suite.
May 26 11:58:24.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:58:24.159: INFO: namespace: e2e-tests-projected-fxr5q, resource: bindings, ignored listing per whitelist
May 26 11:58:24.164: INFO: namespace e2e-tests-projected-fxr5q deletion completed in 6.077445504s
• [SLOW TEST:18.419 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:58:24.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 26 11:58:24.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 11:58:36.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-q4q4n" for this suite.
May 26 11:59:26.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 11:59:26.853: INFO: namespace: e2e-tests-pods-q4q4n, resource: bindings, ignored listing per whitelist
May 26 11:59:26.872: INFO: namespace e2e-tests-pods-q4q4n deletion completed in 50.091542044s
• [SLOW TEST:62.708 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 11:59:26.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 26 11:59:26.972: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:26.973: INFO: Number of nodes with available pods: 0
May 26 11:59:26.973: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:59:27.979: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:27.982: INFO: Number of nodes with available pods: 0
May 26 11:59:27.982: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:59:28.977: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:28.980: INFO: Number of nodes with available pods: 0
May 26 11:59:28.980: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:59:29.978: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:29.981: INFO: Number of nodes with available pods: 0
May 26 11:59:29.982: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:59:30.977: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:30.979: INFO: Number of nodes with available pods: 0
May 26 11:59:30.979: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:59:31.978: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:31.980: INFO: Number of nodes with available pods: 0
May 26 11:59:31.980: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:59:32.978: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:32.980: INFO: Number of nodes with available pods: 0
May 26 11:59:32.980: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:59:33.978: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:33.980: INFO: Number of nodes with available pods: 0
May 26 11:59:33.980: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:59:34.978: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:34.981: INFO: Number of nodes with available pods: 0
May 26 11:59:34.981: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:59:36.211: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:36.213: INFO: Number of nodes with available pods: 0
May 26 11:59:36.213: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:59:36.978: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:36.981: INFO: Number of nodes with available pods: 0
May 26 11:59:36.981: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:59:37.977: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:37.979: INFO: Number of nodes with available pods: 0
May 26 11:59:37.979: INFO: Node hunter-worker is running more than one daemon pod
May 26 11:59:38.978: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:38.981: INFO: Number of nodes with available pods: 1
May 26 11:59:38.981: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:40.025: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:40.028: INFO: Number of nodes with available pods: 1
May 26 11:59:40.028: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:40.978: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:40.981: INFO: Number of nodes with available pods: 1
May 26 11:59:40.981: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:41.978: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:41.981: INFO: Number of nodes with available pods: 1
May 26 11:59:41.981: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:42.978: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:42.980: INFO: Number of nodes with available pods: 2
May 26 11:59:42.980: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 26 11:59:42.992: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:42.994: INFO: Number of nodes with available pods: 1
May 26 11:59:42.994: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:43.999: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:44.001: INFO: Number of nodes with available pods: 1
May 26 11:59:44.002: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:44.998: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:45.000: INFO: Number of nodes with available pods: 1
May 26 11:59:45.000: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:45.999: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:46.002: INFO: Number of nodes with available pods: 1
May 26 11:59:46.002: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:46.998: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:47.001: INFO: Number of nodes with available pods: 1
May 26 11:59:47.001: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:47.997: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:47.999: INFO: Number of nodes with available pods: 1
May 26 11:59:47.999: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:48.998: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:49.000: INFO: Number of nodes with available pods: 1
May 26 11:59:49.000: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:49.999: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:50.002: INFO: Number of nodes with available pods: 1
May 26 11:59:50.002: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:50.998: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:51.001: INFO: Number of nodes with available pods: 1
May 26 11:59:51.001: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:51.999: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:52.002: INFO: Number of nodes with available pods: 1
May 26 11:59:52.002: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:52.998: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:53.001: INFO: Number of nodes with available pods: 1
May 26 11:59:53.001: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:53.999: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:54.002: INFO: Number of nodes with available pods: 1
May 26 11:59:54.002: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:54.999: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:55.002: INFO: Number of nodes with available pods: 1
May 26 11:59:55.002: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:55.998: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:56.001: INFO: Number of nodes with available pods: 1
May 26 11:59:56.001: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:56.998: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:57.000: INFO: Number of nodes with available pods: 1
May 26 11:59:57.001: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:57.998: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:58.002: INFO: Number of nodes with available pods: 1
May 26 11:59:58.002: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:58.998: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 11:59:59.000: INFO: Number of nodes with available pods: 1
May 26 11:59:59.000: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 11:59:59.999: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 12:00:00.002: INFO: Number of nodes with available pods: 1
May 26 12:00:00.002: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 12:00:00.999: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 12:00:01.002: INFO: Number of nodes with available pods: 1
May 26 12:00:01.002: INFO: Node hunter-worker2 is running more than one daemon pod
May 26 12:00:01.998: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 26 12:00:02.001: INFO: Number of nodes with available pods: 2
May 26 12:00:02.001: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-r86m4, will wait for the garbage collector to delete the pods
May 26 12:00:02.061: INFO: Deleting DaemonSet.extensions daemon-set took: 5.049518ms
May 26 12:00:02.161: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.197793ms
May 26 12:00:11.778: INFO: Number of nodes with available pods: 0
May 26 12:00:11.779: INFO: Number of running nodes: 0, number of available pods: 0
May 26 12:00:11.781: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-r86m4/daemonsets","resourceVersion":"12611053"},"items":null}
May 26 12:00:11.783: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-r86m4/pods","resourceVersion":"12611053"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:00:11.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-r86m4" for this suite.
May 26 12:00:17.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:00:17.824: INFO: namespace: e2e-tests-daemonsets-r86m4, resource: bindings, ignored listing per whitelist
May 26 12:00:17.855: INFO: namespace e2e-tests-daemonsets-r86m4 deletion completed in 6.064336572s
• [SLOW TEST:50.983 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:00:17.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 26 12:00:18.076: INFO: Pod name cleanup-pod: Found 0 pods out of 1
May 26 12:00:23.080: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 26 12:00:29.086: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
May 26 12:00:29.112: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-gz4lt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gz4lt/deployments/test-cleanup-deployment,UID:7e233b99-9f48-11ea-99e8-0242ac110002,ResourceVersion:12611126,Generation:1,CreationTimestamp:2020-05-26 12:00:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
May 26 12:00:29.150: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
May 26 12:00:29.150: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
May 26 12:00:29.150: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-gz4lt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gz4lt/replicasets/test-cleanup-controller,UID:7788da2b-9f48-11ea-99e8-0242ac110002,ResourceVersion:12611127,Generation:1,CreationTimestamp:2020-05-26 12:00:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 7e233b99-9f48-11ea-99e8-0242ac110002 0xc0014f9ab7 0xc0014f9ab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
May 26 12:00:29.192: INFO: Pod "test-cleanup-controller-wrfr2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-wrfr2,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-gz4lt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gz4lt/pods/test-cleanup-controller-wrfr2,UID:77996dee-9f48-11ea-99e8-0242ac110002,ResourceVersion:12611122,Generation:0,CreationTimestamp:2020-05-26 12:00:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 7788da2b-9f48-11ea-99e8-0242ac110002 0xc0018a2d17 0xc0018a2d18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kd5g6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kd5g6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kd5g6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018a2e80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018a2ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:00:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:00:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:00:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:00:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.146,StartTime:2020-05-26 12:00:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-26 12:00:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b0262377d7db5a7cb7891a82fa044bc256872ace967f43eff59409aacc55b77e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:00:29.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-gz4lt" for this suite.
May 26 12:00:37.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:00:37.342: INFO: namespace: e2e-tests-deployment-gz4lt, resource: bindings, ignored listing per whitelist
May 26 12:00:37.352: INFO: namespace e2e-tests-deployment-gz4lt deletion completed in 8.14241936s
• [SLOW TEST:19.497 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:00:37.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-ctrbr
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 26 12:00:37.450: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 26 12:01:17.613: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.85:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-ctrbr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 26 12:01:17.614: INFO: >>> kubeConfig: /root/.kube/config
I0526 12:01:17.641448 6 log.go:172] (0xc001c1c2c0) (0xc000f8bcc0) Create stream
I0526 12:01:17.641470 6 log.go:172] (0xc001c1c2c0) (0xc000f8bcc0) Stream added, broadcasting: 1
I0526 12:01:17.642936 6 log.go:172] (0xc001c1c2c0) Reply frame received for 1
I0526 12:01:17.642965 6 log.go:172] (0xc001c1c2c0) (0xc001194000) Create stream
I0526 12:01:17.642979 6 log.go:172] (0xc001c1c2c0) (0xc001194000) Stream added, broadcasting: 3
I0526 12:01:17.643637 6 log.go:172] (0xc001c1c2c0) Reply frame received for 3
I0526 12:01:17.643660 6 log.go:172] (0xc001c1c2c0) (0xc0016a6960) Create stream
I0526 12:01:17.643673 6 log.go:172] (0xc001c1c2c0) (0xc0016a6960) Stream added, broadcasting: 5
I0526 12:01:17.644289 6 log.go:172] (0xc001c1c2c0) Reply frame received for 5
I0526 12:01:17.996062 6 log.go:172] (0xc001c1c2c0) Data frame received for 3
I0526 12:01:17.996086 6 log.go:172] (0xc001194000) (3) Data frame handling
I0526 12:01:17.996099 6 log.go:172] (0xc001194000) (3) Data frame sent
I0526 12:01:17.996246 6 log.go:172] (0xc001c1c2c0) Data frame received for 3
I0526 12:01:17.996260 6 log.go:172] (0xc001194000) (3) Data frame handling
I0526 12:01:17.996292 6 log.go:172] (0xc001c1c2c0) Data frame received for 5
I0526 12:01:17.996321 6 log.go:172] (0xc0016a6960) (5) Data frame handling
I0526 12:01:17.997985 6 log.go:172] (0xc001c1c2c0) Data frame received for 1
I0526 12:01:17.998002 6 log.go:172] (0xc000f8bcc0) (1) Data frame handling
I0526 12:01:17.998015 6 log.go:172] (0xc000f8bcc0) (1) Data frame sent
I0526 12:01:17.998023 6 log.go:172] (0xc001c1c2c0) (0xc000f8bcc0) Stream removed, broadcasting: 1
I0526 12:01:17.998051 6 log.go:172] (0xc001c1c2c0) Go away received
I0526 12:01:17.998092 6 log.go:172]
(0xc001c1c2c0) (0xc000f8bcc0) Stream removed, broadcasting: 1 I0526 12:01:17.998106 6 log.go:172] (0xc001c1c2c0) (0xc001194000) Stream removed, broadcasting: 3 I0526 12:01:17.998112 6 log.go:172] (0xc001c1c2c0) (0xc0016a6960) Stream removed, broadcasting: 5 May 26 12:01:17.998: INFO: Found all expected endpoints: [netserver-0] May 26 12:01:17.999: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.148:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-ctrbr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 12:01:17.999: INFO: >>> kubeConfig: /root/.kube/config I0526 12:01:18.020706 6 log.go:172] (0xc001ad0580) (0xc0016a7040) Create stream I0526 12:01:18.020744 6 log.go:172] (0xc001ad0580) (0xc0016a7040) Stream added, broadcasting: 1 I0526 12:01:18.022406 6 log.go:172] (0xc001ad0580) Reply frame received for 1 I0526 12:01:18.022446 6 log.go:172] (0xc001ad0580) (0xc0016a7180) Create stream I0526 12:01:18.022465 6 log.go:172] (0xc001ad0580) (0xc0016a7180) Stream added, broadcasting: 3 I0526 12:01:18.023028 6 log.go:172] (0xc001ad0580) Reply frame received for 3 I0526 12:01:18.023041 6 log.go:172] (0xc001ad0580) (0xc0016a7220) Create stream I0526 12:01:18.023045 6 log.go:172] (0xc001ad0580) (0xc0016a7220) Stream added, broadcasting: 5 I0526 12:01:18.023596 6 log.go:172] (0xc001ad0580) Reply frame received for 5 I0526 12:01:18.342351 6 log.go:172] (0xc001ad0580) Data frame received for 3 I0526 12:01:18.342381 6 log.go:172] (0xc0016a7180) (3) Data frame handling I0526 12:01:18.342397 6 log.go:172] (0xc0016a7180) (3) Data frame sent I0526 12:01:18.342401 6 log.go:172] (0xc001ad0580) Data frame received for 3 I0526 12:01:18.342516 6 log.go:172] (0xc0016a7180) (3) Data frame handling I0526 12:01:18.342629 6 log.go:172] (0xc001ad0580) Data frame received for 5 I0526 12:01:18.342635 6 log.go:172] (0xc0016a7220) (5) Data 
frame handling I0526 12:01:18.343592 6 log.go:172] (0xc001ad0580) Data frame received for 1 I0526 12:01:18.343605 6 log.go:172] (0xc0016a7040) (1) Data frame handling I0526 12:01:18.343621 6 log.go:172] (0xc0016a7040) (1) Data frame sent I0526 12:01:18.343630 6 log.go:172] (0xc001ad0580) (0xc0016a7040) Stream removed, broadcasting: 1 I0526 12:01:18.343636 6 log.go:172] (0xc001ad0580) Go away received I0526 12:01:18.343836 6 log.go:172] (0xc001ad0580) (0xc0016a7040) Stream removed, broadcasting: 1 I0526 12:01:18.343870 6 log.go:172] (0xc001ad0580) (0xc0016a7180) Stream removed, broadcasting: 3 I0526 12:01:18.343884 6 log.go:172] (0xc001ad0580) (0xc0016a7220) Stream removed, broadcasting: 5 May 26 12:01:18.343: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:01:18.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-ctrbr" for this suite. 
May 26 12:01:40.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:01:40.418: INFO: namespace: e2e-tests-pod-network-test-ctrbr, resource: bindings, ignored listing per whitelist May 26 12:01:40.439: INFO: namespace e2e-tests-pod-network-test-ctrbr deletion completed in 22.092218831s • [SLOW TEST:63.087 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:01:40.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 26 12:01:40.599: INFO: Waiting up to 5m0s for pod "var-expansion-a8be3363-9f48-11ea-b1d1-0242ac110018" in namespace "e2e-tests-var-expansion-cbnvz" to be "success or failure" May 26 12:01:40.646: INFO: Pod "var-expansion-a8be3363-9f48-11ea-b1d1-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 47.084088ms May 26 12:01:42.650: INFO: Pod "var-expansion-a8be3363-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051022444s May 26 12:01:44.654: INFO: Pod "var-expansion-a8be3363-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054882939s May 26 12:01:46.658: INFO: Pod "var-expansion-a8be3363-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058607414s May 26 12:01:48.661: INFO: Pod "var-expansion-a8be3363-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061822058s May 26 12:01:50.664: INFO: Pod "var-expansion-a8be3363-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.065285691s May 26 12:01:52.668: INFO: Pod "var-expansion-a8be3363-9f48-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.068710926s STEP: Saw pod success May 26 12:01:52.668: INFO: Pod "var-expansion-a8be3363-9f48-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:01:52.670: INFO: Trying to get logs from node hunter-worker pod var-expansion-a8be3363-9f48-11ea-b1d1-0242ac110018 container dapi-container: STEP: delete the pod May 26 12:01:52.688: INFO: Waiting for pod var-expansion-a8be3363-9f48-11ea-b1d1-0242ac110018 to disappear May 26 12:01:52.692: INFO: Pod var-expansion-a8be3363-9f48-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:01:52.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-cbnvz" for this suite. 
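The repeated `Phase="Pending" … Elapsed: …` lines above come from a simple poll loop: the framework re-checks the pod's phase roughly every 2 seconds, up to the 5m0s deadline, until it reports `Succeeded`. A minimal self-contained sketch of that cadence, assuming a hypothetical `fake_phase` stub in place of a real `kubectl get pod "$pod" -o jsonpath='{.status.phase}'` query (no cluster involved):

```shell
# fake_phase N: hypothetical stand-in for querying the pod's status.phase;
# reports Pending for the first three polls, then Succeeded.
fake_phase() {
  if [ "$1" -lt 3 ]; then echo "Pending"; else echo "Succeeded"; fi
}

# Poll until the phase is Succeeded or the attempt budget is exhausted
# (150 attempts at a 2s interval approximates the 5m0s deadline above).
poll_until_succeeded() {
  attempt=0
  while [ "$attempt" -lt 150 ]; do
    phase=$(fake_phase "$attempt")
    echo "Pod phase=${phase} (attempt ${attempt})"
    if [ "$phase" = "Succeeded" ]; then return 0; fi
    attempt=$((attempt + 1))
    # sleep 2   # the real framework waits between polls; omitted here
  done
  return 1
}

poll_until_succeeded
```

In the real suite this corresponds roughly to the Go framework's `wait`-based polling helpers; the shell version only makes the rhythm of the `Elapsed` lines above concrete.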
May 26 12:01:58.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:01:58.779: INFO: namespace: e2e-tests-var-expansion-cbnvz, resource: bindings, ignored listing per whitelist May 26 12:01:58.815: INFO: namespace e2e-tests-var-expansion-cbnvz deletion completed in 6.119517515s • [SLOW TEST:18.376 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:01:58.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 26 12:01:58.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine 
--namespace=e2e-tests-kubectl-9jmht' May 26 12:02:01.452: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 26 12:02:01.452: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 26 12:02:03.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-9jmht' May 26 12:02:03.580: INFO: stderr: "" May 26 12:02:03.580: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:02:03.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9jmht" for this suite. 
May 26 12:02:27.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:02:27.687: INFO: namespace: e2e-tests-kubectl-9jmht, resource: bindings, ignored listing per whitelist May 26 12:02:27.687: INFO: namespace e2e-tests-kubectl-9jmht deletion completed in 24.103323198s • [SLOW TEST:28.871 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:02:27.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-c4e956cc-9f48-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume secrets May 26 12:02:27.876: INFO: Waiting up to 5m0s for pod "pod-secrets-c4eb389e-9f48-11ea-b1d1-0242ac110018" in namespace "e2e-tests-secrets-2bs6r" to be "success or failure" May 26 12:02:27.878: INFO: Pod "pod-secrets-c4eb389e-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.346811ms May 26 12:02:29.882: INFO: Pod "pod-secrets-c4eb389e-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005542247s May 26 12:02:31.884: INFO: Pod "pod-secrets-c4eb389e-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008352777s May 26 12:02:33.888: INFO: Pod "pod-secrets-c4eb389e-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011763406s May 26 12:02:35.891: INFO: Pod "pod-secrets-c4eb389e-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014755864s May 26 12:02:37.894: INFO: Pod "pod-secrets-c4eb389e-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.017698141s May 26 12:02:39.897: INFO: Pod "pod-secrets-c4eb389e-9f48-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.021175341s STEP: Saw pod success May 26 12:02:39.897: INFO: Pod "pod-secrets-c4eb389e-9f48-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:02:39.900: INFO: Trying to get logs from node hunter-worker pod pod-secrets-c4eb389e-9f48-11ea-b1d1-0242ac110018 container secret-volume-test: STEP: delete the pod May 26 12:02:39.920: INFO: Waiting for pod pod-secrets-c4eb389e-9f48-11ea-b1d1-0242ac110018 to disappear May 26 12:02:39.931: INFO: Pod pod-secrets-c4eb389e-9f48-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:02:39.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-2bs6r" for this suite. 
May 26 12:02:45.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:02:46.008: INFO: namespace: e2e-tests-secrets-2bs6r, resource: bindings, ignored listing per whitelist May 26 12:02:46.041: INFO: namespace e2e-tests-secrets-2bs6r deletion completed in 6.074693126s • [SLOW TEST:18.354 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:02:46.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 26 12:02:46.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 26 12:02:46.315: INFO: stderr: "" May 26 12:02:46.315: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:02:46.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gxkcn" for this suite. May 26 12:02:52.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:02:52.441: INFO: namespace: e2e-tests-kubectl-gxkcn, resource: bindings, ignored listing per whitelist May 26 12:02:52.447: INFO: namespace e2e-tests-kubectl-gxkcn deletion completed in 6.129058392s • [SLOW TEST:6.406 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:02:52.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0526 12:02:58.895871 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 12:02:58.895: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For 
function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:02:58.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-z8p6s" for this suite. May 26 12:03:04.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:03:04.940: INFO: namespace: e2e-tests-gc-z8p6s, resource: bindings, ignored listing per whitelist May 26 12:03:04.978: INFO: namespace e2e-tests-gc-z8p6s deletion completed in 6.079455106s • [SLOW TEST:12.530 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:03:04.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-j2hws/secret-test-db1b603f-9f48-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume secrets May 26 12:03:05.120: INFO: Waiting up to 
5m0s for pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018" in namespace "e2e-tests-secrets-j2hws" to be "success or failure" May 26 12:03:05.156: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 35.343248ms May 26 12:03:07.385: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2646461s May 26 12:03:09.388: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267771735s May 26 12:03:11.392: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271196663s May 26 12:03:13.395: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.274872218s May 26 12:03:15.398: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.278051478s May 26 12:03:17.402: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.281358614s May 26 12:03:19.406: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.285383987s May 26 12:03:21.409: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.288879453s May 26 12:03:23.412: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.291649866s May 26 12:03:25.415: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.295048908s May 26 12:03:27.419: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.298510734s May 26 12:03:29.422: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 24.301965797s May 26 12:03:31.425: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.304543445s May 26 12:03:33.428: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.307484978s STEP: Saw pod success May 26 12:03:33.428: INFO: Pod "pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:03:33.430: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018 container env-test: STEP: delete the pod May 26 12:03:33.461: INFO: Waiting for pod pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018 to disappear May 26 12:03:33.477: INFO: Pod pod-configmaps-db1c80b2-9f48-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:03:33.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-j2hws" for this suite. 
May 26 12:03:39.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:03:55.156: INFO: namespace: e2e-tests-secrets-j2hws, resource: bindings, ignored listing per whitelist May 26 12:03:55.156: INFO: namespace e2e-tests-secrets-j2hws deletion completed in 21.676509779s • [SLOW TEST:50.178 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:03:55.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 26 12:03:55.245: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:03:55.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-kubectl-tzqkk" for this suite. May 26 12:04:01.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:04:01.539: INFO: namespace: e2e-tests-kubectl-tzqkk, resource: bindings, ignored listing per whitelist May 26 12:04:01.578: INFO: namespace e2e-tests-kubectl-tzqkk deletion completed in 6.216215659s • [SLOW TEST:6.421 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:04:01.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-47drf in namespace e2e-tests-proxy-9blwg I0526 12:04:01.723815 6 runners.go:184] Created replication controller with name: proxy-service-47drf, namespace: e2e-tests-proxy-9blwg, replica count: 1 I0526 12:04:02.774188 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 12:04:03.774394 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 12:04:04.774582 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 12:04:05.774749 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 12:04:06.774962 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 12:04:07.775217 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 12:04:08.775444 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 12:04:09.775650 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 12:04:10.775813 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 12:04:11.776010 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 12:04:12.776226 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 12:04:13.776392 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 
runningButNotReady I0526 12:04:14.776598 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0526 12:04:15.776804 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0526 12:04:16.777005 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0526 12:04:17.777320 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0526 12:04:18.777528 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0526 12:04:19.777697 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0526 12:04:20.777860 6 runners.go:184] proxy-service-47drf Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 12:04:20.780: INFO: setup took 19.098043702s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 26 12:04:20.786: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9blwg/pods/proxy-service-47drf-2kk9j:160/proxy/: foo (200; 5.362733ms) May 26 12:04:20.786: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9blwg/pods/proxy-service-47drf-2kk9j:1080/proxy/: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:04:37.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-4wfnq" for this suite. May 26 12:04:43.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:04:43.664: INFO: namespace: e2e-tests-services-4wfnq, resource: bindings, ignored listing per whitelist May 26 12:04:43.715: INFO: namespace e2e-tests-services-4wfnq deletion completed in 6.071229968s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.239 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:04:43.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 26 12:04:56.335: INFO: Successfully updated pod "annotationupdate15f0b0c2-9f49-11ea-b1d1-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:05:00.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qqmq6" for this suite. May 26 12:05:22.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:05:22.484: INFO: namespace: e2e-tests-downward-api-qqmq6, resource: bindings, ignored listing per whitelist May 26 12:05:22.521: INFO: namespace e2e-tests-downward-api-qqmq6 deletion completed in 22.11058743s • [SLOW TEST:38.806 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:05:22.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should 
create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 26 12:05:22.638: INFO: namespace e2e-tests-kubectl-2646m May 26 12:05:22.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2646m' May 26 12:05:22.900: INFO: stderr: "" May 26 12:05:22.900: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 26 12:05:23.903: INFO: Selector matched 1 pods for map[app:redis] May 26 12:05:23.903: INFO: Found 0 / 1 May 26 12:05:24.904: INFO: Selector matched 1 pods for map[app:redis] May 26 12:05:24.904: INFO: Found 0 / 1 May 26 12:05:25.904: INFO: Selector matched 1 pods for map[app:redis] May 26 12:05:25.904: INFO: Found 0 / 1 May 26 12:05:26.904: INFO: Selector matched 1 pods for map[app:redis] May 26 12:05:26.904: INFO: Found 0 / 1 May 26 12:05:27.904: INFO: Selector matched 1 pods for map[app:redis] May 26 12:05:27.904: INFO: Found 0 / 1 May 26 12:05:28.904: INFO: Selector matched 1 pods for map[app:redis] May 26 12:05:28.904: INFO: Found 0 / 1 May 26 12:05:29.905: INFO: Selector matched 1 pods for map[app:redis] May 26 12:05:29.905: INFO: Found 0 / 1 May 26 12:05:30.904: INFO: Selector matched 1 pods for map[app:redis] May 26 12:05:30.904: INFO: Found 0 / 1 May 26 12:05:31.942: INFO: Selector matched 1 pods for map[app:redis] May 26 12:05:31.942: INFO: Found 0 / 1 May 26 12:05:32.904: INFO: Selector matched 1 pods for map[app:redis] May 26 12:05:32.904: INFO: Found 0 / 1 May 26 12:05:33.904: INFO: Selector matched 1 pods for map[app:redis] May 26 12:05:33.904: INFO: Found 1 / 1 May 26 12:05:33.904: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 26 12:05:33.907: INFO: Selector matched 1 pods for map[app:redis] May 26 12:05:33.907: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 26 12:05:33.907: INFO: wait on redis-master startup in e2e-tests-kubectl-2646m May 26 12:05:33.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ztkv9 redis-master --namespace=e2e-tests-kubectl-2646m' May 26 12:05:34.001: INFO: stderr: "" May 26 12:05:34.001: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 26 May 12:05:33.251 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 May 12:05:33.251 # Server started, Redis version 3.2.12\n1:M 26 May 12:05:33.252 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 May 12:05:33.252 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 26 12:05:34.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-2646m' May 26 12:05:34.184: INFO: stderr: "" May 26 12:05:34.184: INFO: stdout: "service/rm2 exposed\n" May 26 12:05:34.196: INFO: Service rm2 in namespace e2e-tests-kubectl-2646m found. 
STEP: exposing service May 26 12:05:36.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-2646m' May 26 12:05:36.366: INFO: stderr: "" May 26 12:05:36.366: INFO: stdout: "service/rm3 exposed\n" May 26 12:05:36.402: INFO: Service rm3 in namespace e2e-tests-kubectl-2646m found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:05:38.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2646m" for this suite. May 26 12:06:00.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:06:00.473: INFO: namespace: e2e-tests-kubectl-2646m, resource: bindings, ignored listing per whitelist May 26 12:06:00.511: INFO: namespace e2e-tests-kubectl-2646m deletion completed in 22.100287417s • [SLOW TEST:37.990 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:06:00.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 26 12:06:00.572: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43b66a12-9f49-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-tsmcf" to be "success or failure" May 26 12:06:00.598: INFO: Pod "downwardapi-volume-43b66a12-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.407839ms May 26 12:06:02.601: INFO: Pod "downwardapi-volume-43b66a12-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028315348s May 26 12:06:05.042: INFO: Pod "downwardapi-volume-43b66a12-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469111023s May 26 12:06:07.045: INFO: Pod "downwardapi-volume-43b66a12-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.472450554s May 26 12:06:09.048: INFO: Pod "downwardapi-volume-43b66a12-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.475266306s May 26 12:06:11.052: INFO: Pod "downwardapi-volume-43b66a12-9f49-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.479157298s STEP: Saw pod success May 26 12:06:11.052: INFO: Pod "downwardapi-volume-43b66a12-9f49-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:06:11.054: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-43b66a12-9f49-11ea-b1d1-0242ac110018 container client-container: STEP: delete the pod May 26 12:06:11.076: INFO: Waiting for pod downwardapi-volume-43b66a12-9f49-11ea-b1d1-0242ac110018 to disappear May 26 12:06:11.137: INFO: Pod downwardapi-volume-43b66a12-9f49-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:06:11.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tsmcf" for this suite. May 26 12:06:17.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:06:17.169: INFO: namespace: e2e-tests-downward-api-tsmcf, resource: bindings, ignored listing per whitelist May 26 12:06:17.216: INFO: namespace e2e-tests-downward-api-tsmcf deletion completed in 6.07531088s • [SLOW TEST:16.704 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client May 26 12:06:17.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 26 12:06:17.291: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 12:06:17.307: INFO: Waiting for terminating namespaces to be deleted... May 26 12:06:17.309: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 26 12:06:17.314: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 26 12:06:17.314: INFO: Container kube-proxy ready: true, restart count 0 May 26 12:06:17.314: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 26 12:06:17.314: INFO: Container kindnet-cni ready: true, restart count 0 May 26 12:06:17.314: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 26 12:06:17.314: INFO: Container coredns ready: true, restart count 0 May 26 12:06:17.314: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 26 12:06:17.318: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 26 12:06:17.318: INFO: Container kindnet-cni ready: true, restart count 0 May 26 12:06:17.318: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 26 12:06:17.318: INFO: Container coredns ready: true, restart count 0 May 26 12:06:17.318: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 26 12:06:17.318: INFO: Container kube-proxy ready: true, restart count 0 [It] validates 
that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-53e61948-9f49-11ea-b1d1-0242ac110018 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-53e61948-9f49-11ea-b1d1-0242ac110018 off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-53e61948-9f49-11ea-b1d1-0242ac110018 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:06:37.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-5m54h" for this suite. 
May 26 12:06:56.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:06:56.144: INFO: namespace: e2e-tests-sched-pred-5m54h, resource: bindings, ignored listing per whitelist May 26 12:06:56.164: INFO: namespace e2e-tests-sched-pred-5m54h deletion completed in 18.284745095s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:38.948 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:06:56.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-64eb3ea7-9f49-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume secrets May 26 12:06:56.300: INFO: Waiting up to 5m0s for pod "pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018" in namespace "e2e-tests-secrets-9gnwb" to be 
"success or failure" May 26 12:06:56.320: INFO: Pod "pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.296055ms May 26 12:06:58.323: INFO: Pod "pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02331811s May 26 12:07:00.327: INFO: Pod "pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026573801s May 26 12:07:02.330: INFO: Pod "pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029522965s May 26 12:07:04.333: INFO: Pod "pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032808384s May 26 12:07:06.336: INFO: Pod "pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.036236062s May 26 12:07:08.340: INFO: Pod "pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.040208265s May 26 12:07:10.343: INFO: Pod "pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 14.043107528s May 26 12:07:12.347: INFO: Pod "pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.046405278s STEP: Saw pod success May 26 12:07:12.347: INFO: Pod "pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:07:12.348: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018 container secret-volume-test: STEP: delete the pod May 26 12:07:12.366: INFO: Waiting for pod pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018 to disappear May 26 12:07:12.395: INFO: Pod pod-secrets-64ed709d-9f49-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:07:12.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-9gnwb" for this suite. May 26 12:07:18.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:07:18.435: INFO: namespace: e2e-tests-secrets-9gnwb, resource: bindings, ignored listing per whitelist May 26 12:07:18.486: INFO: namespace e2e-tests-secrets-9gnwb deletion completed in 6.087893353s • [SLOW TEST:22.323 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 
12:07:18.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 26 12:07:18.606: INFO: Waiting up to 5m0s for pod "var-expansion-723708d1-9f49-11ea-b1d1-0242ac110018" in namespace "e2e-tests-var-expansion-f8z4p" to be "success or failure" May 26 12:07:18.610: INFO: Pod "var-expansion-723708d1-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213959ms May 26 12:07:20.613: INFO: Pod "var-expansion-723708d1-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00731809s May 26 12:07:22.618: INFO: Pod "var-expansion-723708d1-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011913778s May 26 12:07:24.625: INFO: Pod "var-expansion-723708d1-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019195067s May 26 12:07:26.629: INFO: Pod "var-expansion-723708d1-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022855277s May 26 12:07:28.632: INFO: Pod "var-expansion-723708d1-9f49-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 10.026210271s May 26 12:07:30.635: INFO: Pod "var-expansion-723708d1-9f49-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.029205223s STEP: Saw pod success May 26 12:07:30.635: INFO: Pod "var-expansion-723708d1-9f49-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:07:30.637: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-723708d1-9f49-11ea-b1d1-0242ac110018 container dapi-container: STEP: delete the pod May 26 12:07:30.912: INFO: Waiting for pod var-expansion-723708d1-9f49-11ea-b1d1-0242ac110018 to disappear May 26 12:07:30.952: INFO: Pod var-expansion-723708d1-9f49-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:07:30.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-f8z4p" for this suite. May 26 12:07:36.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:07:37.000: INFO: namespace: e2e-tests-var-expansion-f8z4p, resource: bindings, ignored listing per whitelist May 26 12:07:37.061: INFO: namespace e2e-tests-var-expansion-f8z4p deletion completed in 6.104943322s • [SLOW TEST:18.574 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 
12:07:37.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-7d4534cf-9f49-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume secrets May 26 12:07:37.182: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7d45edad-9f49-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-8x6qp" to be "success or failure" May 26 12:07:37.192: INFO: Pod "pod-projected-secrets-7d45edad-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.258162ms May 26 12:07:39.194: INFO: Pod "pod-projected-secrets-7d45edad-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012195091s May 26 12:07:41.198: INFO: Pod "pod-projected-secrets-7d45edad-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015651925s May 26 12:07:43.201: INFO: Pod "pod-projected-secrets-7d45edad-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018804787s May 26 12:07:45.205: INFO: Pod "pod-projected-secrets-7d45edad-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022709143s May 26 12:07:47.450: INFO: Pod "pod-projected-secrets-7d45edad-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.268016175s May 26 12:07:49.454: INFO: Pod "pod-projected-secrets-7d45edad-9f49-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.27135417s STEP: Saw pod success May 26 12:07:49.454: INFO: Pod "pod-projected-secrets-7d45edad-9f49-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:07:49.456: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-7d45edad-9f49-11ea-b1d1-0242ac110018 container secret-volume-test: STEP: delete the pod May 26 12:07:49.479: INFO: Waiting for pod pod-projected-secrets-7d45edad-9f49-11ea-b1d1-0242ac110018 to disappear May 26 12:07:49.521: INFO: Pod pod-projected-secrets-7d45edad-9f49-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:07:49.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8x6qp" for this suite. May 26 12:07:55.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:07:55.646: INFO: namespace: e2e-tests-projected-8x6qp, resource: bindings, ignored listing per whitelist May 26 12:07:55.658: INFO: namespace e2e-tests-projected-8x6qp deletion completed in 6.093193906s • [SLOW TEST:18.597 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client 
May 26 12:07:55.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-88618570-9f49-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume configMaps May 26 12:07:55.786: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-88622ff6-9f49-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-q68tt" to be "success or failure" May 26 12:07:55.814: INFO: Pod "pod-projected-configmaps-88622ff6-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 27.376574ms May 26 12:07:57.816: INFO: Pod "pod-projected-configmaps-88622ff6-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029960496s May 26 12:07:59.820: INFO: Pod "pod-projected-configmaps-88622ff6-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033260384s May 26 12:08:01.959: INFO: Pod "pod-projected-configmaps-88622ff6-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172783012s May 26 12:08:03.962: INFO: Pod "pod-projected-configmaps-88622ff6-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176058489s May 26 12:08:05.966: INFO: Pod "pod-projected-configmaps-88622ff6-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179640884s May 26 12:08:07.968: INFO: Pod "pod-projected-configmaps-88622ff6-9f49-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.181933963s STEP: Saw pod success May 26 12:08:07.968: INFO: Pod "pod-projected-configmaps-88622ff6-9f49-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:08:07.995: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-88622ff6-9f49-11ea-b1d1-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 26 12:08:08.083: INFO: Waiting for pod pod-projected-configmaps-88622ff6-9f49-11ea-b1d1-0242ac110018 to disappear May 26 12:08:08.115: INFO: Pod pod-projected-configmaps-88622ff6-9f49-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:08:08.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q68tt" for this suite. May 26 12:08:14.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:08:14.160: INFO: namespace: e2e-tests-projected-q68tt, resource: bindings, ignored listing per whitelist May 26 12:08:14.193: INFO: namespace e2e-tests-projected-q68tt deletion completed in 6.075582839s • [SLOW TEST:18.535 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:08:14.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 26 12:08:14.335: INFO: Waiting up to 5m0s for pod "downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-8m8kv" to be "success or failure" May 26 12:08:14.348: INFO: Pod "downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.737384ms May 26 12:08:16.350: INFO: Pod "downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015609252s May 26 12:08:18.353: INFO: Pod "downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018631631s May 26 12:08:20.361: INFO: Pod "downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025895003s May 26 12:08:22.364: INFO: Pod "downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.028909264s May 26 12:08:24.368: INFO: Pod "downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.032901958s May 26 12:08:26.371: INFO: Pod "downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.036430101s May 26 12:08:28.375: INFO: Pod "downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 14.039647595s May 26 12:08:30.421: INFO: Pod "downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.086009018s STEP: Saw pod success May 26 12:08:30.421: INFO: Pod "downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:08:30.424: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018 container client-container: STEP: delete the pod May 26 12:08:30.475: INFO: Waiting for pod downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018 to disappear May 26 12:08:30.493: INFO: Pod downwardapi-volume-936e4d1a-9f49-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:08:30.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8m8kv" for this suite. 
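The "Waiting up to 5m0s for pod ... Elapsed: ..." lines above come from the framework polling the pod phase on a fixed interval until it reaches "success or failure" or the timeout expires. A minimal sketch of that wait loop (the function name `wait_for` and the injectable `clock`/`sleep` parameters are illustrative, not the framework's API):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns truthy
    or `timeout` elapses, mirroring the e2e framework's
    'Waiting up to 5m0s for pod ... to be "success or failure"' loop."""
    start = clock()
    while True:
        elapsed = clock() - start
        if condition():
            return elapsed  # the log's "Elapsed: ..." value at success
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        sleep(interval)
```

Note the roughly 2-second spacing between the "Pending" lines in the log, which matches a fixed polling interval rather than exponential backoff.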
May 26 12:08:36.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:08:36.591: INFO: namespace: e2e-tests-downward-api-8m8kv, resource: bindings, ignored listing per whitelist May 26 12:08:36.591: INFO: namespace e2e-tests-downward-api-8m8kv deletion completed in 6.093705729s • [SLOW TEST:22.397 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:08:36.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 26 12:08:36.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run 
e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-6p7zb' May 26 12:08:36.765: INFO: stderr: "" May 26 12:08:36.765: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 May 26 12:08:36.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6p7zb' May 26 12:08:51.260: INFO: stderr: "" May 26 12:08:51.260: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:08:51.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6p7zb" for this suite. May 26 12:08:57.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:08:57.363: INFO: namespace: e2e-tests-kubectl-6p7zb, resource: bindings, ignored listing per whitelist May 26 12:08:57.377: INFO: namespace e2e-tests-kubectl-6p7zb deletion completed in 6.114312229s • [SLOW TEST:20.786 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:08:57.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-6qdtc [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 26 12:08:57.551: INFO: Found 0 stateful pods, waiting for 3 May 26 12:09:07.555: INFO: Found 1 stateful pods, waiting for 3 May 26 12:09:17.637: INFO: Found 2 stateful pods, waiting for 3 May 26 12:09:27.554: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 12:09:27.554: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 12:09:27.554: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 26 12:09:37.555: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 12:09:37.555: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 12:09:37.555: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 26 12:09:37.562: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6qdtc ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 26 12:09:37.925: INFO: stderr: "I0526 12:09:37.683806 2635 log.go:172] (0xc000772160) (0xc0005c46e0) Create stream\nI0526 12:09:37.683853 2635 log.go:172] (0xc000772160) (0xc0005c46e0) Stream added, broadcasting: 1\nI0526 12:09:37.685925 2635 log.go:172] (0xc000772160) Reply frame received for 1\nI0526 12:09:37.685965 2635 log.go:172] (0xc000772160) (0xc000024c80) Create stream\nI0526 12:09:37.685980 2635 log.go:172] (0xc000772160) (0xc000024c80) Stream added, broadcasting: 3\nI0526 12:09:37.686698 2635 log.go:172] (0xc000772160) Reply frame received for 3\nI0526 12:09:37.686728 2635 log.go:172] (0xc000772160) (0xc0005c4780) Create stream\nI0526 12:09:37.686737 2635 log.go:172] (0xc000772160) (0xc0005c4780) Stream added, broadcasting: 5\nI0526 12:09:37.687348 2635 log.go:172] (0xc000772160) Reply frame received for 5\nI0526 12:09:37.919708 2635 log.go:172] (0xc000772160) Data frame received for 3\nI0526 12:09:37.919741 2635 log.go:172] (0xc000024c80) (3) Data frame handling\nI0526 12:09:37.919770 2635 log.go:172] (0xc000024c80) (3) Data frame sent\nI0526 12:09:37.919784 2635 log.go:172] (0xc000772160) Data frame received for 3\nI0526 12:09:37.919796 2635 log.go:172] (0xc000024c80) (3) Data frame handling\nI0526 12:09:37.919933 2635 log.go:172] (0xc000772160) Data frame received for 5\nI0526 12:09:37.919952 2635 log.go:172] (0xc0005c4780) (5) Data frame handling\nI0526 12:09:37.921455 2635 log.go:172] (0xc000772160) Data frame received for 1\nI0526 12:09:37.921480 2635 log.go:172] (0xc0005c46e0) (1) Data frame handling\nI0526 12:09:37.921497 2635 log.go:172] (0xc0005c46e0) (1) Data frame sent\nI0526 12:09:37.921522 2635 log.go:172] (0xc000772160) (0xc0005c46e0) Stream removed, broadcasting: 1\nI0526 12:09:37.921546 2635 log.go:172] (0xc000772160) Go away received\nI0526 12:09:37.921659 2635 log.go:172] 
(0xc000772160) (0xc0005c46e0) Stream removed, broadcasting: 1\nI0526 12:09:37.921669 2635 log.go:172] (0xc000772160) (0xc000024c80) Stream removed, broadcasting: 3\nI0526 12:09:37.921673 2635 log.go:172] (0xc000772160) (0xc0005c4780) Stream removed, broadcasting: 5\n" May 26 12:09:37.926: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 26 12:09:37.926: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 26 12:09:47.955: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 26 12:09:57.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6qdtc ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 12:09:58.491: INFO: stderr: "I0526 12:09:58.093626 2657 log.go:172] (0xc0001386e0) (0xc000784640) Create stream\nI0526 12:09:58.093676 2657 log.go:172] (0xc0001386e0) (0xc000784640) Stream added, broadcasting: 1\nI0526 12:09:58.095247 2657 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0526 12:09:58.095272 2657 log.go:172] (0xc0001386e0) (0xc0007846e0) Create stream\nI0526 12:09:58.095279 2657 log.go:172] (0xc0001386e0) (0xc0007846e0) Stream added, broadcasting: 3\nI0526 12:09:58.096093 2657 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0526 12:09:58.096142 2657 log.go:172] (0xc0001386e0) (0xc0007b2b40) Create stream\nI0526 12:09:58.096164 2657 log.go:172] (0xc0001386e0) (0xc0007b2b40) Stream added, broadcasting: 5\nI0526 12:09:58.096808 2657 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0526 12:09:58.484912 2657 log.go:172] (0xc0001386e0) Data frame received for 3\nI0526 12:09:58.484940 2657 log.go:172] (0xc0007846e0) (3) Data frame handling\nI0526 
12:09:58.484963 2657 log.go:172] (0xc0007846e0) (3) Data frame sent\nI0526 12:09:58.484975 2657 log.go:172] (0xc0001386e0) Data frame received for 3\nI0526 12:09:58.484984 2657 log.go:172] (0xc0007846e0) (3) Data frame handling\nI0526 12:09:58.485341 2657 log.go:172] (0xc0001386e0) Data frame received for 5\nI0526 12:09:58.485367 2657 log.go:172] (0xc0007b2b40) (5) Data frame handling\nI0526 12:09:58.486310 2657 log.go:172] (0xc0001386e0) Data frame received for 1\nI0526 12:09:58.486337 2657 log.go:172] (0xc000784640) (1) Data frame handling\nI0526 12:09:58.486349 2657 log.go:172] (0xc000784640) (1) Data frame sent\nI0526 12:09:58.486360 2657 log.go:172] (0xc0001386e0) (0xc000784640) Stream removed, broadcasting: 1\nI0526 12:09:58.486486 2657 log.go:172] (0xc0001386e0) Go away received\nI0526 12:09:58.486516 2657 log.go:172] (0xc0001386e0) (0xc000784640) Stream removed, broadcasting: 1\nI0526 12:09:58.486533 2657 log.go:172] (0xc0001386e0) (0xc0007846e0) Stream removed, broadcasting: 3\nI0526 12:09:58.486544 2657 log.go:172] (0xc0001386e0) (0xc0007b2b40) Stream removed, broadcasting: 5\n" May 26 12:09:58.491: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 26 12:09:58.491: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 26 12:10:08.530: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update May 26 12:10:08.530: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 26 12:10:08.530: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 26 12:10:08.530: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 26 12:10:18.553: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update May 26 
12:10:18.553: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 26 12:10:18.553: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 26 12:10:28.536: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update May 26 12:10:28.536: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 26 12:10:28.536: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 26 12:10:38.537: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update May 26 12:10:38.537: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 26 12:10:38.537: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 26 12:10:48.536: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update May 26 12:10:48.536: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 26 12:10:58.536: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update May 26 12:10:58.536: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 26 12:11:08.534: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update STEP: Rolling back to a previous revision May 26 12:11:18.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6qdtc ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 26 12:11:19.097: INFO: stderr: "I0526 12:11:18.820659 2679 log.go:172] (0xc00015c840) (0xc00072e640) Create 
stream\nI0526 12:11:18.820714 2679 log.go:172] (0xc00015c840) (0xc00072e640) Stream added, broadcasting: 1\nI0526 12:11:18.823214 2679 log.go:172] (0xc00015c840) Reply frame received for 1\nI0526 12:11:18.823253 2679 log.go:172] (0xc00015c840) (0xc0005f6dc0) Create stream\nI0526 12:11:18.823273 2679 log.go:172] (0xc00015c840) (0xc0005f6dc0) Stream added, broadcasting: 3\nI0526 12:11:18.823930 2679 log.go:172] (0xc00015c840) Reply frame received for 3\nI0526 12:11:18.823959 2679 log.go:172] (0xc00015c840) (0xc0002e4000) Create stream\nI0526 12:11:18.823971 2679 log.go:172] (0xc00015c840) (0xc0002e4000) Stream added, broadcasting: 5\nI0526 12:11:18.824510 2679 log.go:172] (0xc00015c840) Reply frame received for 5\nI0526 12:11:19.090910 2679 log.go:172] (0xc00015c840) Data frame received for 3\nI0526 12:11:19.090935 2679 log.go:172] (0xc0005f6dc0) (3) Data frame handling\nI0526 12:11:19.090948 2679 log.go:172] (0xc0005f6dc0) (3) Data frame sent\nI0526 12:11:19.090957 2679 log.go:172] (0xc00015c840) Data frame received for 3\nI0526 12:11:19.090974 2679 log.go:172] (0xc0005f6dc0) (3) Data frame handling\nI0526 12:11:19.091311 2679 log.go:172] (0xc00015c840) Data frame received for 5\nI0526 12:11:19.091331 2679 log.go:172] (0xc0002e4000) (5) Data frame handling\nI0526 12:11:19.092914 2679 log.go:172] (0xc00015c840) Data frame received for 1\nI0526 12:11:19.092929 2679 log.go:172] (0xc00072e640) (1) Data frame handling\nI0526 12:11:19.092936 2679 log.go:172] (0xc00072e640) (1) Data frame sent\nI0526 12:11:19.093316 2679 log.go:172] (0xc00015c840) (0xc00072e640) Stream removed, broadcasting: 1\nI0526 12:11:19.093460 2679 log.go:172] (0xc00015c840) (0xc00072e640) Stream removed, broadcasting: 1\nI0526 12:11:19.093476 2679 log.go:172] (0xc00015c840) (0xc0005f6dc0) Stream removed, broadcasting: 3\nI0526 12:11:19.093525 2679 log.go:172] (0xc00015c840) Go away received\nI0526 12:11:19.093572 2679 log.go:172] (0xc00015c840) (0xc0002e4000) Stream removed, broadcasting: 5\n" May 
26 12:11:19.097: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 26 12:11:19.097: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 26 12:11:29.124: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 26 12:11:39.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6qdtc ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 12:11:39.602: INFO: stderr: "I0526 12:11:39.265447 2702 log.go:172] (0xc00015c840) (0xc000716640) Create stream\nI0526 12:11:39.265485 2702 log.go:172] (0xc00015c840) (0xc000716640) Stream added, broadcasting: 1\nI0526 12:11:39.267975 2702 log.go:172] (0xc00015c840) Reply frame received for 1\nI0526 12:11:39.268016 2702 log.go:172] (0xc00015c840) (0xc0007166e0) Create stream\nI0526 12:11:39.268028 2702 log.go:172] (0xc00015c840) (0xc0007166e0) Stream added, broadcasting: 3\nI0526 12:11:39.268735 2702 log.go:172] (0xc00015c840) Reply frame received for 3\nI0526 12:11:39.268750 2702 log.go:172] (0xc00015c840) (0xc000716780) Create stream\nI0526 12:11:39.268756 2702 log.go:172] (0xc00015c840) (0xc000716780) Stream added, broadcasting: 5\nI0526 12:11:39.269739 2702 log.go:172] (0xc00015c840) Reply frame received for 5\nI0526 12:11:39.599056 2702 log.go:172] (0xc00015c840) Data frame received for 3\nI0526 12:11:39.599078 2702 log.go:172] (0xc0007166e0) (3) Data frame handling\nI0526 12:11:39.599083 2702 log.go:172] (0xc0007166e0) (3) Data frame sent\nI0526 12:11:39.599087 2702 log.go:172] (0xc00015c840) Data frame received for 3\nI0526 12:11:39.599091 2702 log.go:172] (0xc0007166e0) (3) Data frame handling\nI0526 12:11:39.599114 2702 log.go:172] (0xc00015c840) Data frame received for 5\nI0526 12:11:39.599142 2702 log.go:172] (0xc000716780) (5) Data frame handling\nI0526 12:11:39.600099 2702 log.go:172] 
(0xc00015c840) Data frame received for 1\nI0526 12:11:39.600114 2702 log.go:172] (0xc000716640) (1) Data frame handling\nI0526 12:11:39.600126 2702 log.go:172] (0xc000716640) (1) Data frame sent\nI0526 12:11:39.600178 2702 log.go:172] (0xc00015c840) (0xc000716640) Stream removed, broadcasting: 1\nI0526 12:11:39.600263 2702 log.go:172] (0xc00015c840) (0xc000716640) Stream removed, broadcasting: 1\nI0526 12:11:39.600271 2702 log.go:172] (0xc00015c840) (0xc0007166e0) Stream removed, broadcasting: 3\nI0526 12:11:39.600302 2702 log.go:172] (0xc00015c840) Go away received\nI0526 12:11:39.600343 2702 log.go:172] (0xc00015c840) (0xc000716780) Stream removed, broadcasting: 5\n" May 26 12:11:39.602: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 26 12:11:39.602: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 26 12:11:49.618: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update May 26 12:11:49.618: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 26 12:11:49.618: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 26 12:11:49.618: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 26 12:11:59.624: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update May 26 12:11:59.624: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 26 12:11:59.624: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 26 12:12:09.636: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update May 26 12:12:09.636: INFO: Waiting for Pod 
e2e-tests-statefulset-6qdtc/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 26 12:12:09.636: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 26 12:12:19.696: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update May 26 12:12:19.697: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 26 12:12:29.664: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update May 26 12:12:29.664: INFO: Waiting for Pod e2e-tests-statefulset-6qdtc/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 26 12:12:39.624: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qdtc/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 26 12:12:49.624: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6qdtc May 26 12:12:49.626: INFO: Scaling statefulset ss2 to 0 May 26 12:13:30.200: INFO: Waiting for statefulset status.replicas updated to 0 May 26 12:13:30.203: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:13:30.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-6qdtc" for this suite. 
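The StatefulSet test above exercises two behaviors visible in the log: pods are recreated in reverse ordinal order (ss2-2 before ss2-1 before ss2-0), and the update is "complete" only once every pod's controller-revision-hash matches the update revision — hence the repeated "Waiting for Pod ... to have revision ... update revision ..." lines. A sketch of both checks, assuming plain dicts in place of the real API objects:

```python
def rolling_update_order(name, replicas):
    """Pods are updated highest-ordinal-first during a StatefulSet
    rolling update, as seen with ss2-2 recreated before ss2-1 and ss2-0."""
    return [f"{name}-{i}" for i in range(replicas - 1, -1, -1)]

def pods_pending_update(pod_revisions, update_revision):
    """Return pods whose current revision label still differs from the
    update revision, i.e. the pods the log is still 'Waiting for'."""
    return sorted(p for p, rev in pod_revisions.items()
                  if rev != update_revision)
```

As the log shows, the pending set shrinks from {ss2-0, ss2-1, ss2-2} to {ss2-0, ss2-1} to {ss2-0} as the controller works backward through the ordinals; a rollback simply swaps which revision hash is the target.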
May 26 12:13:36.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:13:36.340: INFO: namespace: e2e-tests-statefulset-6qdtc, resource: bindings, ignored listing per whitelist May 26 12:13:36.373: INFO: namespace e2e-tests-statefulset-6qdtc deletion completed in 6.098768221s • [SLOW TEST:278.996 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:13:36.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 26 12:13:36.454: INFO: Waiting up to 5m0s for pod "downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-l8j4f" to be "success or failure" May 26 12:13:36.459: INFO: Pod 
"downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240265ms May 26 12:13:38.462: INFO: Pod "downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007742027s May 26 12:13:40.466: INFO: Pod "downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011695347s May 26 12:13:42.470: INFO: Pod "downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015277301s May 26 12:13:44.472: INFO: Pod "downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017946474s May 26 12:13:46.476: INFO: Pod "downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021761382s May 26 12:13:48.480: INFO: Pod "downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 12.025585986s May 26 12:13:50.483: INFO: Pod "downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.028868916s STEP: Saw pod success May 26 12:13:50.483: INFO: Pod "downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:13:50.485: INFO: Trying to get logs from node hunter-worker2 pod downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018 container dapi-container: STEP: delete the pod May 26 12:13:50.522: INFO: Waiting for pod downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018 to disappear May 26 12:13:50.543: INFO: Pod downward-api-536fbf3a-9f4a-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:13:50.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-l8j4f" for this suite. 
May 26 12:13:56.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:13:56.598: INFO: namespace: e2e-tests-downward-api-l8j4f, resource: bindings, ignored listing per whitelist May 26 12:13:56.624: INFO: namespace e2e-tests-downward-api-l8j4f deletion completed in 6.077313751s • [SLOW TEST:20.250 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:13:56.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 26 12:13:56.735: INFO: Waiting up to 5m0s for pod "pod-5f85252c-9f4a-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-ffqvv" to be "success or failure" May 26 12:13:56.757: INFO: Pod "pod-5f85252c-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.571702ms May 26 12:13:58.760: INFO: Pod "pod-5f85252c-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025475758s May 26 12:14:00.764: INFO: Pod "pod-5f85252c-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0295297s May 26 12:14:02.768: INFO: Pod "pod-5f85252c-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0330052s May 26 12:14:04.771: INFO: Pod "pod-5f85252c-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036488865s May 26 12:14:06.775: INFO: Pod "pod-5f85252c-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.040051309s May 26 12:14:08.778: INFO: Pod "pod-5f85252c-9f4a-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 12.043065707s May 26 12:14:10.781: INFO: Pod "pod-5f85252c-9f4a-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.045996757s STEP: Saw pod success May 26 12:14:10.781: INFO: Pod "pod-5f85252c-9f4a-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:14:10.783: INFO: Trying to get logs from node hunter-worker2 pod pod-5f85252c-9f4a-11ea-b1d1-0242ac110018 container test-container: STEP: delete the pod May 26 12:14:10.930: INFO: Waiting for pod pod-5f85252c-9f4a-11ea-b1d1-0242ac110018 to disappear May 26 12:14:10.937: INFO: Pod pod-5f85252c-9f4a-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:14:10.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ffqvv" for this suite. 
May 26 12:14:16.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:14:16.976: INFO: namespace: e2e-tests-emptydir-ffqvv, resource: bindings, ignored listing per whitelist May 26 12:14:17.012: INFO: namespace e2e-tests-emptydir-ffqvv deletion completed in 6.072265643s • [SLOW TEST:20.388 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:14:17.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-5vzh STEP: Creating a pod to test atomic-volume-subpath May 26 12:14:17.138: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5vzh" in namespace "e2e-tests-subpath-pkqlj" to be "success or failure" May 26 12:14:17.152: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Pending", 
Reason="", readiness=false. Elapsed: 14.256538ms May 26 12:14:19.156: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017781843s May 26 12:14:21.159: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020991383s May 26 12:14:23.162: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023633959s May 26 12:14:25.165: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0272503s May 26 12:14:27.169: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031165748s May 26 12:14:29.397: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.259147922s May 26 12:14:31.401: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.263137256s May 26 12:14:33.403: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Pending", Reason="", readiness=false. Elapsed: 16.265572756s May 26 12:14:35.406: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Running", Reason="", readiness=true. Elapsed: 18.268626379s May 26 12:14:37.410: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Running", Reason="", readiness=false. Elapsed: 20.27236413s May 26 12:14:39.413: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Running", Reason="", readiness=false. Elapsed: 22.275236158s May 26 12:14:41.416: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Running", Reason="", readiness=false. Elapsed: 24.278246085s May 26 12:14:43.419: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Running", Reason="", readiness=false. Elapsed: 26.281379149s May 26 12:14:45.423: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.284967117s May 26 12:14:47.427: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Running", Reason="", readiness=false. Elapsed: 30.288854172s May 26 12:14:49.439: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Running", Reason="", readiness=false. Elapsed: 32.300843131s May 26 12:14:51.443: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Running", Reason="", readiness=false. Elapsed: 34.304704965s May 26 12:14:53.446: INFO: Pod "pod-subpath-test-configmap-5vzh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.307884866s STEP: Saw pod success May 26 12:14:53.446: INFO: Pod "pod-subpath-test-configmap-5vzh" satisfied condition "success or failure" May 26 12:14:53.448: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-5vzh container test-container-subpath-configmap-5vzh: STEP: delete the pod May 26 12:14:53.491: INFO: Waiting for pod pod-subpath-test-configmap-5vzh to disappear May 26 12:14:53.527: INFO: Pod pod-subpath-test-configmap-5vzh no longer exists STEP: Deleting pod pod-subpath-test-configmap-5vzh May 26 12:14:53.527: INFO: Deleting pod "pod-subpath-test-configmap-5vzh" in namespace "e2e-tests-subpath-pkqlj" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:14:53.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-pkqlj" for this suite. 
May 26 12:14:59.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:14:59.609: INFO: namespace: e2e-tests-subpath-pkqlj, resource: bindings, ignored listing per whitelist May 26 12:14:59.640: INFO: namespace e2e-tests-subpath-pkqlj deletion completed in 6.080714928s • [SLOW TEST:42.628 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:14:59.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 26 12:14:59.838: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.379752ms) May 26 12:14:59.840: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.25353ms) May 26 12:14:59.849: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 9.360478ms) May 26 12:14:59.851: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.145147ms) May 26 12:14:59.853: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 1.629739ms) May 26 12:14:59.855: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 1.699819ms) May 26 12:14:59.856: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 1.37714ms) May 26 12:14:59.858: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 1.513363ms) May 26 12:14:59.859: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 1.645036ms) May 26 12:14:59.861: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 1.754169ms) May 26 12:14:59.863: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 1.916576ms) May 26 12:14:59.865: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 1.901644ms) May 26 12:14:59.867: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 1.890975ms) May 26 12:14:59.894: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 27.114562ms) May 26 12:14:59.897: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.148857ms) May 26 12:14:59.900: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.356838ms) May 26 12:14:59.902: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.35373ms) May 26 12:14:59.905: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.044107ms) May 26 12:14:59.908: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.610344ms) May 26 12:14:59.910: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.55868ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:14:59.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-v8vx5" for this suite. May 26 12:15:05.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:15:05.938: INFO: namespace: e2e-tests-proxy-v8vx5, resource: bindings, ignored listing per whitelist May 26 12:15:05.981: INFO: namespace e2e-tests-proxy-v8vx5 deletion completed in 6.068288634s • [SLOW TEST:6.341 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:15:05.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 26 12:15:32.211: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:32.260: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:34.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:34.263: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:36.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:36.264: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:38.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:38.263: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:40.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:40.264: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:42.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:42.264: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:44.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:44.263: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:46.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:46.264: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:48.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:48.264: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:50.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:50.264: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:52.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:52.264: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:54.260: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear May 26 12:15:54.264: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:56.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:56.264: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:15:58.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:15:58.264: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:16:00.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:16:00.264: INFO: Pod pod-with-prestop-exec-hook still exists May 26 12:16:02.260: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 12:16:02.264: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:16:02.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-774bs" for this suite. 
May 26 12:16:24.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:16:24.304: INFO: namespace: e2e-tests-container-lifecycle-hook-774bs, resource: bindings, ignored listing per whitelist May 26 12:16:24.347: INFO: namespace e2e-tests-container-lifecycle-hook-774bs deletion completed in 22.073797679s • [SLOW TEST:78.366 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:16:24.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 26 12:16:24.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018" in namespace 
"e2e-tests-downward-api-lqstv" to be "success or failure" May 26 12:16:24.476: INFO: Pod "downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.896485ms May 26 12:16:26.479: INFO: Pod "downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019520478s May 26 12:16:28.483: INFO: Pod "downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022957856s May 26 12:16:30.486: INFO: Pod "downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026563914s May 26 12:16:32.490: INFO: Pod "downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029954479s May 26 12:16:34.493: INFO: Pod "downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.03342401s May 26 12:16:36.496: INFO: Pod "downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 12.036152978s May 26 12:16:38.500: INFO: Pod "downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.039785766s STEP: Saw pod success May 26 12:16:38.500: INFO: Pod "downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:16:38.502: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018 container client-container: STEP: delete the pod May 26 12:16:38.531: INFO: Waiting for pod downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018 to disappear May 26 12:16:38.557: INFO: Pod downwardapi-volume-b790ede9-9f4a-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:16:38.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-lqstv" for this suite. May 26 12:16:44.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:16:44.649: INFO: namespace: e2e-tests-downward-api-lqstv, resource: bindings, ignored listing per whitelist May 26 12:16:44.694: INFO: namespace e2e-tests-downward-api-lqstv deletion completed in 6.079421268s • [SLOW TEST:20.347 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client May 26 12:16:44.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6qnqm STEP: creating a selector STEP: Creating the service pods in kubernetes May 26 12:16:44.776: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 26 12:17:30.882: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.107 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-6qnqm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 12:17:30.882: INFO: >>> kubeConfig: /root/.kube/config I0526 12:17:30.915761 6 log.go:172] (0xc000587810) (0xc000eea500) Create stream I0526 12:17:30.915782 6 log.go:172] (0xc000587810) (0xc000eea500) Stream added, broadcasting: 1 I0526 12:17:30.917578 6 log.go:172] (0xc000587810) Reply frame received for 1 I0526 12:17:30.917643 6 log.go:172] (0xc000587810) (0xc001762000) Create stream I0526 12:17:30.917670 6 log.go:172] (0xc000587810) (0xc001762000) Stream added, broadcasting: 3 I0526 12:17:30.918348 6 log.go:172] (0xc000587810) Reply frame received for 3 I0526 12:17:30.918372 6 log.go:172] (0xc000587810) (0xc000eea780) Create stream I0526 12:17:30.918382 6 log.go:172] (0xc000587810) (0xc000eea780) Stream added, broadcasting: 5 I0526 12:17:30.919907 6 log.go:172] (0xc000587810) Reply frame received for 5 I0526 12:17:32.252102 6 log.go:172] (0xc000587810) Data frame received for 3 I0526 12:17:32.252131 6 log.go:172] (0xc001762000) (3) Data frame handling I0526 12:17:32.252151 6 log.go:172] (0xc001762000) (3) Data 
frame sent I0526 12:17:32.252272 6 log.go:172] (0xc000587810) Data frame received for 3 I0526 12:17:32.252282 6 log.go:172] (0xc001762000) (3) Data frame handling I0526 12:17:32.252448 6 log.go:172] (0xc000587810) Data frame received for 5 I0526 12:17:32.252484 6 log.go:172] (0xc000eea780) (5) Data frame handling I0526 12:17:32.254518 6 log.go:172] (0xc000587810) Data frame received for 1 I0526 12:17:32.254541 6 log.go:172] (0xc000eea500) (1) Data frame handling I0526 12:17:32.254568 6 log.go:172] (0xc000eea500) (1) Data frame sent I0526 12:17:32.254592 6 log.go:172] (0xc000587810) (0xc000eea500) Stream removed, broadcasting: 1 I0526 12:17:32.254697 6 log.go:172] (0xc000587810) Go away received I0526 12:17:32.254758 6 log.go:172] (0xc000587810) (0xc000eea500) Stream removed, broadcasting: 1 I0526 12:17:32.254799 6 log.go:172] (0xc000587810) (0xc001762000) Stream removed, broadcasting: 3 I0526 12:17:32.254816 6 log.go:172] (0xc000587810) (0xc000eea780) Stream removed, broadcasting: 5 May 26 12:17:32.254: INFO: Found all expected endpoints: [netserver-0] May 26 12:17:32.257: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.170 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-6qnqm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 12:17:32.257: INFO: >>> kubeConfig: /root/.kube/config I0526 12:17:32.289241 6 log.go:172] (0xc000587d90) (0xc000eeb900) Create stream I0526 12:17:32.289271 6 log.go:172] (0xc000587d90) (0xc000eeb900) Stream added, broadcasting: 1 I0526 12:17:32.290300 6 log.go:172] (0xc000587d90) Reply frame received for 1 I0526 12:17:32.290327 6 log.go:172] (0xc000587d90) (0xc000eeba40) Create stream I0526 12:17:32.290335 6 log.go:172] (0xc000587d90) (0xc000eeba40) Stream added, broadcasting: 3 I0526 12:17:32.290861 6 log.go:172] (0xc000587d90) Reply frame received for 3 I0526 12:17:32.290883 6 log.go:172] (0xc000587d90) 
(0xc0014bc000) Create stream I0526 12:17:32.290892 6 log.go:172] (0xc000587d90) (0xc0014bc000) Stream added, broadcasting: 5 I0526 12:17:32.291346 6 log.go:172] (0xc000587d90) Reply frame received for 5 I0526 12:17:33.632031 6 log.go:172] (0xc000587d90) Data frame received for 3 I0526 12:17:33.632073 6 log.go:172] (0xc000eeba40) (3) Data frame handling I0526 12:17:33.632107 6 log.go:172] (0xc000eeba40) (3) Data frame sent I0526 12:17:33.632142 6 log.go:172] (0xc000587d90) Data frame received for 3 I0526 12:17:33.632161 6 log.go:172] (0xc000eeba40) (3) Data frame handling I0526 12:17:33.632232 6 log.go:172] (0xc000587d90) Data frame received for 5 I0526 12:17:33.632254 6 log.go:172] (0xc0014bc000) (5) Data frame handling I0526 12:17:33.634293 6 log.go:172] (0xc000587d90) Data frame received for 1 I0526 12:17:33.634330 6 log.go:172] (0xc000eeb900) (1) Data frame handling I0526 12:17:33.634349 6 log.go:172] (0xc000eeb900) (1) Data frame sent I0526 12:17:33.634430 6 log.go:172] (0xc000587d90) (0xc000eeb900) Stream removed, broadcasting: 1 I0526 12:17:33.634479 6 log.go:172] (0xc000587d90) Go away received I0526 12:17:33.634627 6 log.go:172] (0xc000587d90) (0xc000eeb900) Stream removed, broadcasting: 1 I0526 12:17:33.634701 6 log.go:172] (0xc000587d90) (0xc000eeba40) Stream removed, broadcasting: 3 I0526 12:17:33.634734 6 log.go:172] (0xc000587d90) (0xc0014bc000) Stream removed, broadcasting: 5 May 26 12:17:33.634: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:17:33.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-6qnqm" for this suite. 
May 26 12:17:57.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:17:57.684: INFO: namespace: e2e-tests-pod-network-test-6qnqm, resource: bindings, ignored listing per whitelist May 26 12:17:57.699: INFO: namespace e2e-tests-pod-network-test-6qnqm deletion completed in 24.060569843s • [SLOW TEST:73.005 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:17:57.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 26 12:18:10.388: INFO: Successfully updated pod "pod-update-ef3a2bbd-9f4a-11ea-b1d1-0242ac110018" STEP: verifying the updated pod is in kubernetes May 26 12:18:10.397: INFO: Pod update OK [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:18:10.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-thjbh" for this suite.
May 26 12:18:32.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:18:32.502: INFO: namespace: e2e-tests-pods-thjbh, resource: bindings, ignored listing per whitelist
May 26 12:18:32.541: INFO: namespace e2e-tests-pods-thjbh deletion completed in 22.141833503s
• [SLOW TEST:34.842 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:18:32.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 26 12:18:33.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04620b65-9f4b-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-g72zr" to be "success or failure"
May 26 12:18:33.899: INFO: Pod "downwardapi-volume-04620b65-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.104209ms
May 26 12:18:35.926: INFO: Pod "downwardapi-volume-04620b65-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049455597s
May 26 12:18:37.929: INFO: Pod "downwardapi-volume-04620b65-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052159989s
May 26 12:18:39.951: INFO: Pod "downwardapi-volume-04620b65-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073830059s
May 26 12:18:41.954: INFO: Pod "downwardapi-volume-04620b65-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076905042s
May 26 12:18:43.956: INFO: Pod "downwardapi-volume-04620b65-9f4b-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 10.079267751s
May 26 12:18:45.959: INFO: Pod "downwardapi-volume-04620b65-9f4b-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.082534948s
STEP: Saw pod success
May 26 12:18:45.960: INFO: Pod "downwardapi-volume-04620b65-9f4b-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 12:18:45.962: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-04620b65-9f4b-11ea-b1d1-0242ac110018 container client-container:
STEP: delete the pod
May 26 12:18:46.106: INFO: Waiting for pod downwardapi-volume-04620b65-9f4b-11ea-b1d1-0242ac110018 to disappear
May 26 12:18:46.126: INFO: Pod downwardapi-volume-04620b65-9f4b-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:18:46.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g72zr" for this suite.
May 26 12:18:52.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:18:52.231: INFO: namespace: e2e-tests-projected-g72zr, resource: bindings, ignored listing per whitelist
May 26 12:18:52.239: INFO: namespace e2e-tests-projected-g72zr deletion completed in 6.110630841s
• [SLOW TEST:19.697 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:18:52.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-0fbedc66-9f4b-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume secrets
May 26 12:18:52.398: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0fc1230f-9f4b-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-s5tcm" to be "success or failure"
May 26 12:18:52.421: INFO: Pod "pod-projected-secrets-0fc1230f-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.346981ms
May 26 12:18:54.424: INFO: Pod "pod-projected-secrets-0fc1230f-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026436291s
May 26 12:18:56.428: INFO: Pod "pod-projected-secrets-0fc1230f-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030543067s
May 26 12:18:58.431: INFO: Pod "pod-projected-secrets-0fc1230f-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033535952s
May 26 12:19:00.460: INFO: Pod "pod-projected-secrets-0fc1230f-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062264093s
May 26 12:19:02.463: INFO: Pod "pod-projected-secrets-0fc1230f-9f4b-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 10.065613368s
May 26 12:19:04.466: INFO: Pod "pod-projected-secrets-0fc1230f-9f4b-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.068231523s
STEP: Saw pod success
May 26 12:19:04.466: INFO: Pod "pod-projected-secrets-0fc1230f-9f4b-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 12:19:04.468: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-0fc1230f-9f4b-11ea-b1d1-0242ac110018 container projected-secret-volume-test:
STEP: delete the pod
May 26 12:19:04.501: INFO: Waiting for pod pod-projected-secrets-0fc1230f-9f4b-11ea-b1d1-0242ac110018 to disappear
May 26 12:19:04.512: INFO: Pod pod-projected-secrets-0fc1230f-9f4b-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:19:04.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s5tcm" for this suite.
May 26 12:19:10.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:19:10.570: INFO: namespace: e2e-tests-projected-s5tcm, resource: bindings, ignored listing per whitelist
May 26 12:19:10.597: INFO: namespace e2e-tests-projected-s5tcm deletion completed in 6.081873697s
• [SLOW TEST:18.358 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:19:10.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-1aa86dc9-9f4b-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume configMaps
May 26 12:19:10.693: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-jcrrd" to be "success or failure"
May 26 12:19:10.708: INFO: Pod "pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.573769ms
May 26 12:19:12.711: INFO: Pod "pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018449124s
May 26 12:19:14.715: INFO: Pod "pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021992151s
May 26 12:19:16.814: INFO: Pod "pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12088194s
May 26 12:19:18.817: INFO: Pod "pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12383131s
May 26 12:19:20.820: INFO: Pod "pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.126778808s
May 26 12:19:22.833: INFO: Pod "pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 12.140451289s
May 26 12:19:24.838: INFO: Pod "pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.14537282s
STEP: Saw pod success
May 26 12:19:24.838: INFO: Pod "pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 12:19:24.861: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
May 26 12:19:24.942: INFO: Waiting for pod pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018 to disappear
May 26 12:19:24.953: INFO: Pod pod-projected-configmaps-1aa8eb50-9f4b-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:19:24.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jcrrd" for this suite.
May 26 12:19:30.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:19:31.031: INFO: namespace: e2e-tests-projected-jcrrd, resource: bindings, ignored listing per whitelist
May 26 12:19:31.035: INFO: namespace e2e-tests-projected-jcrrd deletion completed in 6.079522109s
• [SLOW TEST:20.438 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:19:31.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-rv2nq
May 26 12:19:41.179: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-rv2nq
STEP: checking the pod's current state and verifying that restartCount is present
May 26 12:19:41.182: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:23:42.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rv2nq" for this suite.
May 26 12:23:48.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:23:48.161: INFO: namespace: e2e-tests-container-probe-rv2nq, resource: bindings, ignored listing per whitelist
May 26 12:23:48.202: INFO: namespace e2e-tests-container-probe-rv2nq deletion completed in 6.084129124s
• [SLOW TEST:257.166 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:23:48.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c028ceea-9f4b-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume configMaps
May 26 12:23:48.363: INFO: Waiting up to 5m0s for pod "pod-configmaps-c029efed-9f4b-11ea-b1d1-0242ac110018" in namespace "e2e-tests-configmap-kvcrz" to be "success or failure"
May 26 12:23:48.367: INFO: Pod "pod-configmaps-c029efed-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.625892ms
May 26 12:23:50.370: INFO: Pod "pod-configmaps-c029efed-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00740441s
May 26 12:23:52.374: INFO: Pod "pod-configmaps-c029efed-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011518489s
May 26 12:23:54.378: INFO: Pod "pod-configmaps-c029efed-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015530269s
May 26 12:23:56.381: INFO: Pod "pod-configmaps-c029efed-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01860154s
May 26 12:23:58.384: INFO: Pod "pod-configmaps-c029efed-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021514043s
May 26 12:24:00.388: INFO: Pod "pod-configmaps-c029efed-9f4b-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.025591952s
STEP: Saw pod success
May 26 12:24:00.388: INFO: Pod "pod-configmaps-c029efed-9f4b-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 12:24:00.391: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-c029efed-9f4b-11ea-b1d1-0242ac110018 container configmap-volume-test:
STEP: delete the pod
May 26 12:24:00.425: INFO: Waiting for pod pod-configmaps-c029efed-9f4b-11ea-b1d1-0242ac110018 to disappear
May 26 12:24:00.439: INFO: Pod pod-configmaps-c029efed-9f4b-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:24:00.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kvcrz" for this suite.
May 26 12:24:08.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:24:08.494: INFO: namespace: e2e-tests-configmap-kvcrz, resource: bindings, ignored listing per whitelist
May 26 12:24:08.518: INFO: namespace e2e-tests-configmap-kvcrz deletion completed in 8.075630677s
• [SLOW TEST:20.316 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:24:08.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
May 26 12:24:08.681: INFO: Waiting up to 5m0s for pod "pod-cc446bd4-9f4b-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-fcjv5" to be "success or failure"
May 26 12:24:08.770: INFO: Pod "pod-cc446bd4-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 88.697252ms
May 26 12:24:10.774: INFO: Pod "pod-cc446bd4-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092402727s
May 26 12:24:12.777: INFO: Pod "pod-cc446bd4-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095816224s
May 26 12:24:14.781: INFO: Pod "pod-cc446bd4-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099616736s
May 26 12:24:16.784: INFO: Pod "pod-cc446bd4-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102871151s
May 26 12:24:18.787: INFO: Pod "pod-cc446bd4-9f4b-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.106061208s
May 26 12:24:20.791: INFO: Pod "pod-cc446bd4-9f4b-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.1095438s
STEP: Saw pod success
May 26 12:24:20.791: INFO: Pod "pod-cc446bd4-9f4b-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 12:24:20.794: INFO: Trying to get logs from node hunter-worker2 pod pod-cc446bd4-9f4b-11ea-b1d1-0242ac110018 container test-container:
STEP: delete the pod
May 26 12:24:20.826: INFO: Waiting for pod pod-cc446bd4-9f4b-11ea-b1d1-0242ac110018 to disappear
May 26 12:24:20.841: INFO: Pod pod-cc446bd4-9f4b-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:24:20.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fcjv5" for this suite.
May 26 12:24:26.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:24:26.913: INFO: namespace: e2e-tests-emptydir-fcjv5, resource: bindings, ignored listing per whitelist
May 26 12:24:26.925: INFO: namespace e2e-tests-emptydir-fcjv5 deletion completed in 6.081669878s
• [SLOW TEST:18.407 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:24:26.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 26 12:24:27.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:24:37.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-79gvp" for this suite.
May 26 12:25:27.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:25:27.108: INFO: namespace: e2e-tests-pods-79gvp, resource: bindings, ignored listing per whitelist
May 26 12:25:27.154: INFO: namespace e2e-tests-pods-79gvp deletion completed in 50.08358865s
• [SLOW TEST:60.229 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:25:27.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
May 26 12:25:27.246: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-mcf4g" to be "success or failure"
May 26 12:25:27.250: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331658ms
May 26 12:25:29.253: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007778185s
May 26 12:25:31.258: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011935335s
May 26 12:25:33.261: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015441183s
May 26 12:25:35.264: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01856295s
May 26 12:25:37.268: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021878229s
May 26 12:25:39.271: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.025431469s
STEP: Saw pod success
May 26 12:25:39.271: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
May 26 12:25:39.273: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
May 26 12:25:39.299: INFO: Waiting for pod pod-host-path-test to disappear
May 26 12:25:39.304: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:25:39.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-mcf4g" for this suite.
May 26 12:25:45.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:25:45.375: INFO: namespace: e2e-tests-hostpath-mcf4g, resource: bindings, ignored listing per whitelist
May 26 12:25:45.388: INFO: namespace e2e-tests-hostpath-mcf4g deletion completed in 6.082231899s
• [SLOW TEST:18.234 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:25:45.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 26 12:25:45.550: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0603a792-9f4c-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-tvmp8" to be "success or failure"
May 26 12:25:45.568: INFO: Pod "downwardapi-volume-0603a792-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.289205ms
May 26 12:25:47.571: INFO: Pod "downwardapi-volume-0603a792-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020415938s
May 26 12:25:49.575: INFO: Pod "downwardapi-volume-0603a792-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024323592s
May 26 12:25:51.578: INFO: Pod "downwardapi-volume-0603a792-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027557542s
May 26 12:25:53.591: INFO: Pod "downwardapi-volume-0603a792-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041092047s
May 26 12:25:55.595: INFO: Pod "downwardapi-volume-0603a792-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.044670335s
May 26 12:25:57.598: INFO: Pod "downwardapi-volume-0603a792-9f4c-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.047335191s
STEP: Saw pod success
May 26 12:25:57.598: INFO: Pod "downwardapi-volume-0603a792-9f4c-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 12:25:57.599: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-0603a792-9f4c-11ea-b1d1-0242ac110018 container client-container:
STEP: delete the pod
May 26 12:25:57.632: INFO: Waiting for pod downwardapi-volume-0603a792-9f4c-11ea-b1d1-0242ac110018 to disappear
May 26 12:25:57.640: INFO: Pod downwardapi-volume-0603a792-9f4c-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:25:57.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tvmp8" for this suite.
May 26 12:26:03.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:26:03.682: INFO: namespace: e2e-tests-downward-api-tvmp8, resource: bindings, ignored listing per whitelist
May 26 12:26:03.719: INFO: namespace e2e-tests-downward-api-tvmp8 deletion completed in 6.075836908s
• [SLOW TEST:18.330 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:26:03.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 26 12:26:03.879: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10e4b40f-9f4c-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-vgmnl" to be "success or failure"
May 26 12:26:03.904: INFO: Pod "downwardapi-volume-10e4b40f-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 24.154672ms
May 26 12:26:05.907: INFO: Pod "downwardapi-volume-10e4b40f-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027814622s
May 26 12:26:07.910: INFO: Pod "downwardapi-volume-10e4b40f-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030803439s
May 26 12:26:09.914: INFO: Pod "downwardapi-volume-10e4b40f-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034788803s
May 26 12:26:11.917: INFO: Pod "downwardapi-volume-10e4b40f-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037783862s
May 26 12:26:13.919: INFO: Pod "downwardapi-volume-10e4b40f-9f4c-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 10.039896634s
May 26 12:26:15.922: INFO: Pod "downwardapi-volume-10e4b40f-9f4c-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.043042159s
STEP: Saw pod success
May 26 12:26:15.922: INFO: Pod "downwardapi-volume-10e4b40f-9f4c-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 12:26:15.925: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-10e4b40f-9f4c-11ea-b1d1-0242ac110018 container client-container:
STEP: delete the pod
May 26 12:26:15.962: INFO: Waiting for pod downwardapi-volume-10e4b40f-9f4c-11ea-b1d1-0242ac110018 to disappear
May 26 12:26:15.975: INFO: Pod downwardapi-volume-10e4b40f-9f4c-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:26:15.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vgmnl" for this suite.
May 26 12:26:21.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:26:22.048: INFO: namespace: e2e-tests-downward-api-vgmnl, resource: bindings, ignored listing per whitelist
May 26 12:26:22.054: INFO: namespace e2e-tests-downward-api-vgmnl deletion completed in 6.075693752s
• [SLOW TEST:18.335 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:26:22.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 26 12:26:22.157: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bd44eb1-9f4c-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-8cjlc" to be "success or failure"
May 26 12:26:22.172: INFO: Pod "downwardapi-volume-1bd44eb1-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.964287ms
May 26 12:26:24.176: INFO: Pod "downwardapi-volume-1bd44eb1-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018728764s
May 26 12:26:26.180: INFO: Pod "downwardapi-volume-1bd44eb1-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02275956s
May 26 12:26:28.184: INFO: Pod "downwardapi-volume-1bd44eb1-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026226547s
May 26 12:26:30.188: INFO: Pod "downwardapi-volume-1bd44eb1-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.030067248s
May 26 12:26:32.191: INFO: Pod "downwardapi-volume-1bd44eb1-9f4c-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.033876836s
May 26 12:26:34.195: INFO: Pod "downwardapi-volume-1bd44eb1-9f4c-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.037536619s
STEP: Saw pod success
May 26 12:26:34.195: INFO: Pod "downwardapi-volume-1bd44eb1-9f4c-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 12:26:34.198: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1bd44eb1-9f4c-11ea-b1d1-0242ac110018 container client-container:
STEP: delete the pod
May 26 12:26:34.229: INFO: Waiting for pod downwardapi-volume-1bd44eb1-9f4c-11ea-b1d1-0242ac110018 to disappear
May 26 12:26:34.246: INFO: Pod downwardapi-volume-1bd44eb1-9f4c-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:26:34.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8cjlc" for this suite.
May 26 12:26:40.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:26:40.301: INFO: namespace: e2e-tests-projected-8cjlc, resource: bindings, ignored listing per whitelist
May 26 12:26:40.327: INFO: namespace e2e-tests-projected-8cjlc deletion completed in 6.07757945s
• [SLOW TEST:18.273 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:26:40.327: INFO: >>> kubeConfig:
/root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-6vjxp.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6vjxp.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6vjxp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-6vjxp.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6vjxp.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6vjxp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 12:26:56.541: INFO: DNS probes using e2e-tests-dns-6vjxp/dns-test-26b811c5-9f4c-11ea-b1d1-0242ac110018 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:26:56.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-6vjxp" for this suite. May 26 12:27:02.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:27:02.715: INFO: namespace: e2e-tests-dns-6vjxp, resource: bindings, ignored listing per whitelist May 26 12:27:02.726: INFO: namespace e2e-tests-dns-6vjxp deletion completed in 6.102086265s • [SLOW TEST:22.399 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:27:02.726: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 26 12:27:03.599: INFO: Pod name wrapped-volume-race-347a53be-9f4c-11ea-b1d1-0242ac110018: Found 0 pods out of 5 May 26 12:27:08.606: INFO: Pod name wrapped-volume-race-347a53be-9f4c-11ea-b1d1-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-347a53be-9f4c-11ea-b1d1-0242ac110018 in namespace e2e-tests-emptydir-wrapper-4pwsl, will wait for the garbage collector to delete the pods May 26 12:28:50.713: INFO: Deleting ReplicationController wrapped-volume-race-347a53be-9f4c-11ea-b1d1-0242ac110018 took: 6.769339ms May 26 12:28:50.814: INFO: Terminating ReplicationController wrapped-volume-race-347a53be-9f4c-11ea-b1d1-0242ac110018 pods took: 100.245962ms STEP: Creating RC which spawns configmap-volume pods May 26 12:29:32.373: INFO: Pod name wrapped-volume-race-8d2dd0ef-9f4c-11ea-b1d1-0242ac110018: Found 0 pods out of 5 May 26 12:29:37.380: INFO: Pod name wrapped-volume-race-8d2dd0ef-9f4c-11ea-b1d1-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8d2dd0ef-9f4c-11ea-b1d1-0242ac110018 in namespace e2e-tests-emptydir-wrapper-4pwsl, will wait for the garbage collector to delete the pods May 26 12:31:31.478: INFO: Deleting ReplicationController wrapped-volume-race-8d2dd0ef-9f4c-11ea-b1d1-0242ac110018 took: 27.140753ms May 26 12:31:31.578: INFO: Terminating ReplicationController wrapped-volume-race-8d2dd0ef-9f4c-11ea-b1d1-0242ac110018 pods took: 100.171089ms STEP: Creating RC which 
spawns configmap-volume pods May 26 12:32:21.801: INFO: Pod name wrapped-volume-race-f220d989-9f4c-11ea-b1d1-0242ac110018: Found 0 pods out of 5 May 26 12:32:26.809: INFO: Pod name wrapped-volume-race-f220d989-9f4c-11ea-b1d1-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f220d989-9f4c-11ea-b1d1-0242ac110018 in namespace e2e-tests-emptydir-wrapper-4pwsl, will wait for the garbage collector to delete the pods May 26 12:34:10.892: INFO: Deleting ReplicationController wrapped-volume-race-f220d989-9f4c-11ea-b1d1-0242ac110018 took: 6.565985ms May 26 12:34:10.993: INFO: Terminating ReplicationController wrapped-volume-race-f220d989-9f4c-11ea-b1d1-0242ac110018 pods took: 100.216041ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:34:52.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-4pwsl" for this suite. 
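The repeated "Found N pods out of 5" messages above come from a poll-until-ready loop in the test framework. A minimal Python sketch of that pattern follows; the function name, the injectable `count_ready` callable, and the simulated phase sequence are illustrative, not the e2e suite's actual Go code:

```python
import time

def wait_for_pods(count_ready, want, timeout_s=300.0, interval_s=5.0, sleep=time.sleep):
    """Poll count_ready() until it reports `want` ready pods or the timeout expires.

    Mirrors the log's "Found N pods out of 5" retry loop; count_ready is any
    zero-argument callable returning the current number of ready pods, so the
    loop can be exercised without a live cluster.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        n = count_ready()
        print(f"Found {n} pods out of {want}")
        if n >= want:
            return True
        if time.monotonic() >= deadline:
            return False
        sleep(interval_s)

# Simulated cluster: pods become ready one poll at a time.
phases = iter([0, 2, 5])
ok = wait_for_pods(lambda: next(phases), want=5, sleep=lambda _: None)
```

The real framework additionally distinguishes Pending/Running/Succeeded phases per pod, as the elapsed-time lines earlier in the log show; this sketch only captures the counting-and-retry shape.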
May 26 12:35:01.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:35:01.108: INFO: namespace: e2e-tests-emptydir-wrapper-4pwsl, resource: bindings, ignored listing per whitelist May 26 12:35:01.143: INFO: namespace e2e-tests-emptydir-wrapper-4pwsl deletion completed in 8.156730416s • [SLOW TEST:478.417 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:35:01.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod May 26 12:35:13.326: INFO: Pod pod-hostip-513c36e5-9f4d-11ea-b1d1-0242ac110018 has hostIP: 172.17.0.3 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:35:13.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-4w8pv" for this suite. 
May 26 12:35:35.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:35:35.375: INFO: namespace: e2e-tests-pods-4w8pv, resource: bindings, ignored listing per whitelist May 26 12:35:35.400: INFO: namespace e2e-tests-pods-4w8pv deletion completed in 22.071024063s • [SLOW TEST:34.256 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:35:35.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 26 12:35:35.521: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 26 12:35:35.528: INFO: Number of nodes with available pods: 0 May 26 12:35:35.528: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
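The DaemonSet test above drives scheduling by relabeling nodes: a daemon pod lands on a node only when the node's labels satisfy the DaemonSet's nodeSelector. A small sketch of that matching rule; the label key and values here are illustrative (the log names the colors but not the key):

```python
def nodes_running_daemon(node_labels, node_selector):
    """Return the node names whose labels satisfy every key=value in the selector.

    This is the rule being exercised above: relabeling a node in or out of the
    selector adds or removes it from the set of nodes running the daemon pod.
    """
    return sorted(
        name for name, labels in node_labels.items()
        if all(labels.get(k) == v for k, v in node_selector.items())
    )

# Illustrative cluster state: only hunter-worker carries the "blue" label.
nodes = {
    "hunter-worker": {"color": "blue"},
    "hunter-worker2": {"color": "green"},
}
print(nodes_running_daemon(nodes, {"color": "blue"}))
```

The "Number of nodes with available pods" counts in the surrounding log are the framework repeatedly evaluating this membership while the controller converges.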
May 26 12:35:35.627: INFO: Number of nodes with available pods: 0 May 26 12:35:35.627: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:36.884: INFO: Number of nodes with available pods: 0 May 26 12:35:36.884: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:37.631: INFO: Number of nodes with available pods: 0 May 26 12:35:37.631: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:38.631: INFO: Number of nodes with available pods: 0 May 26 12:35:38.631: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:39.631: INFO: Number of nodes with available pods: 0 May 26 12:35:39.631: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:40.630: INFO: Number of nodes with available pods: 0 May 26 12:35:40.630: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:41.631: INFO: Number of nodes with available pods: 0 May 26 12:35:41.631: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:42.631: INFO: Number of nodes with available pods: 0 May 26 12:35:42.631: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:43.631: INFO: Number of nodes with available pods: 0 May 26 12:35:43.631: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:44.632: INFO: Number of nodes with available pods: 0 May 26 12:35:44.632: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:45.714: INFO: Number of nodes with available pods: 0 May 26 12:35:45.714: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:46.631: INFO: Number of nodes with available pods: 0 May 26 12:35:46.631: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:47.631: INFO: Number of nodes with available pods: 0 May 26 12:35:47.631: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:48.631: INFO: Number of nodes with available pods: 1 May 26 
12:35:48.631: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 26 12:35:48.702: INFO: Number of nodes with available pods: 1 May 26 12:35:48.702: INFO: Number of running nodes: 0, number of available pods: 1 May 26 12:35:49.705: INFO: Number of nodes with available pods: 0 May 26 12:35:49.706: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 26 12:35:49.735: INFO: Number of nodes with available pods: 0 May 26 12:35:49.735: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:50.739: INFO: Number of nodes with available pods: 0 May 26 12:35:50.739: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:51.739: INFO: Number of nodes with available pods: 0 May 26 12:35:51.739: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:52.739: INFO: Number of nodes with available pods: 0 May 26 12:35:52.739: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:53.738: INFO: Number of nodes with available pods: 0 May 26 12:35:53.738: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:54.739: INFO: Number of nodes with available pods: 0 May 26 12:35:54.739: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:56.217: INFO: Number of nodes with available pods: 0 May 26 12:35:56.217: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:56.739: INFO: Number of nodes with available pods: 0 May 26 12:35:56.739: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:57.739: INFO: Number of nodes with available pods: 0 May 26 12:35:57.739: INFO: Node hunter-worker is running more than one daemon pod May 26 12:35:58.738: INFO: Number of nodes with available pods: 0 May 26 12:35:58.738: INFO: Node hunter-worker is 
running more than one daemon pod May 26 12:35:59.739: INFO: Number of nodes with available pods: 0 May 26 12:35:59.739: INFO: Node hunter-worker is running more than one daemon pod May 26 12:36:00.739: INFO: Number of nodes with available pods: 0 May 26 12:36:00.739: INFO: Node hunter-worker is running more than one daemon pod May 26 12:36:01.738: INFO: Number of nodes with available pods: 0 May 26 12:36:01.738: INFO: Node hunter-worker is running more than one daemon pod May 26 12:36:02.739: INFO: Number of nodes with available pods: 0 May 26 12:36:02.739: INFO: Node hunter-worker is running more than one daemon pod May 26 12:36:03.738: INFO: Number of nodes with available pods: 0 May 26 12:36:03.739: INFO: Node hunter-worker is running more than one daemon pod May 26 12:36:04.739: INFO: Number of nodes with available pods: 0 May 26 12:36:04.739: INFO: Node hunter-worker is running more than one daemon pod May 26 12:36:05.739: INFO: Number of nodes with available pods: 0 May 26 12:36:05.739: INFO: Node hunter-worker is running more than one daemon pod May 26 12:36:06.739: INFO: Number of nodes with available pods: 1 May 26 12:36:06.739: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6fg7d, will wait for the garbage collector to delete the pods May 26 12:36:06.802: INFO: Deleting DaemonSet.extensions daemon-set took: 5.467812ms May 26 12:36:06.902: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.248658ms May 26 12:36:21.305: INFO: Number of nodes with available pods: 0 May 26 12:36:21.305: INFO: Number of running nodes: 0, number of available pods: 0 May 26 12:36:21.307: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6fg7d/daemonsets","resourceVersion":"12617354"},"items":null} May 26 12:36:21.309: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6fg7d/pods","resourceVersion":"12617354"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:36:21.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-6fg7d" for this suite. May 26 12:36:27.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:36:27.408: INFO: namespace: e2e-tests-daemonsets-6fg7d, resource: bindings, ignored listing per whitelist May 26 12:36:27.425: INFO: namespace e2e-tests-daemonsets-6fg7d deletion completed in 6.077417118s • [SLOW TEST:52.026 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:36:27.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 26 12:36:27.518: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84a7f508-9f4d-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-6prml" to be "success or failure" May 26 12:36:27.539: INFO: Pod "downwardapi-volume-84a7f508-9f4d-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.427689ms May 26 12:36:29.543: INFO: Pod "downwardapi-volume-84a7f508-9f4d-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025153683s May 26 12:36:31.546: INFO: Pod "downwardapi-volume-84a7f508-9f4d-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028665299s May 26 12:36:33.550: INFO: Pod "downwardapi-volume-84a7f508-9f4d-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032369496s May 26 12:36:35.554: INFO: Pod "downwardapi-volume-84a7f508-9f4d-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036471175s May 26 12:36:37.557: INFO: Pod "downwardapi-volume-84a7f508-9f4d-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.03970168s May 26 12:36:39.561: INFO: Pod "downwardapi-volume-84a7f508-9f4d-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.043487422s STEP: Saw pod success May 26 12:36:39.561: INFO: Pod "downwardapi-volume-84a7f508-9f4d-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:36:39.564: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-84a7f508-9f4d-11ea-b1d1-0242ac110018 container client-container: STEP: delete the pod May 26 12:36:39.622: INFO: Waiting for pod downwardapi-volume-84a7f508-9f4d-11ea-b1d1-0242ac110018 to disappear May 26 12:36:39.638: INFO: Pod downwardapi-volume-84a7f508-9f4d-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:36:39.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6prml" for this suite. May 26 12:36:45.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:36:45.689: INFO: namespace: e2e-tests-projected-6prml, resource: bindings, ignored listing per whitelist May 26 12:36:45.718: INFO: namespace e2e-tests-projected-6prml deletion completed in 6.076185786s • [SLOW TEST:18.292 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:36:45.718: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-tlkd5 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-tlkd5;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-tlkd5 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-tlkd5;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-tlkd5.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-tlkd5.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-tlkd5.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-tlkd5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-tlkd5.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-tlkd5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-tlkd5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-tlkd5.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-tlkd5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 41.130.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.130.41_udp@PTR;check="$$(dig +tcp +noall +answer +search 41.130.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.130.41_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-tlkd5 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-tlkd5;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-tlkd5 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-tlkd5.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-tlkd5.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-tlkd5.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5.svc;check="$$(dig +notcp 
+noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-tlkd5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-tlkd5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-tlkd5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-tlkd5.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-tlkd5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 41.130.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.130.41_udp@PTR;check="$$(dig +tcp +noall +answer +search 41.130.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.130.41_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 12:37:03.918: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:03.936: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:03.939: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:03.941: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tlkd5 from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:03.944: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5 from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:03.946: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:03.947: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5.svc from pod 
e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:03.949: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:03.951: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:03.964: INFO: Lookups using e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tlkd5 jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5 jessie_udp@dns-test-service.e2e-tests-dns-tlkd5.svc jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc] May 26 12:37:08.986: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:09.006: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:09.008: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: 
the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:09.010: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tlkd5 from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:09.012: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5 from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:09.014: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:09.016: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:09.018: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:09.020: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:09.033: INFO: Lookups using e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc jessie_udp@dns-test-service 
jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tlkd5 jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5 jessie_udp@dns-test-service.e2e-tests-dns-tlkd5.svc jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc] May 26 12:37:13.979: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:13.995: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:13.997: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:13.999: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tlkd5 from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:14.002: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5 from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:14.004: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:14.006: INFO: Unable 
to read jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:14.008: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:14.010: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:14.023: INFO: Lookups using e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tlkd5 jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5 jessie_udp@dns-test-service.e2e-tests-dns-tlkd5.svc jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc] May 26 12:37:18.987: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:19.006: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:19.021: INFO: Unable to read jessie_tcp@dns-test-service from pod 
e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:19.024: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tlkd5 from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:19.028: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5 from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:19.030: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:19.033: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:19.036: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:19.039: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:19.051: INFO: Lookups using e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018 failed for: 
[wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tlkd5 jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5 jessie_udp@dns-test-service.e2e-tests-dns-tlkd5.svc jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc] May 26 12:37:23.987: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:24.006: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:24.009: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:24.011: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tlkd5 from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:24.014: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5 from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:24.016: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested 
resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:24.019: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:24.022: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:24.025: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc from pod e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018: the server could not find the requested resource (get pods dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018) May 26 12:37:24.041: INFO: Lookups using e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-tlkd5 jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5 jessie_udp@dns-test-service.e2e-tests-dns-tlkd5.svc jessie_tcp@dns-test-service.e2e-tests-dns-tlkd5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-tlkd5.svc] May 26 12:37:29.058: INFO: DNS probes using e2e-tests-dns-tlkd5/dns-test-8f954338-9f4d-11ea-b1d1-0242ac110018 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:37:29.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-tlkd5" for this suite. 
May 26 12:37:35.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:37:35.248: INFO: namespace: e2e-tests-dns-tlkd5, resource: bindings, ignored listing per whitelist May 26 12:37:35.266: INFO: namespace e2e-tests-dns-tlkd5 deletion completed in 6.069008572s • [SLOW TEST:49.548 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:37:35.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:37:51.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-8wntp" for 
this suite. May 26 12:37:57.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:37:57.478: INFO: namespace: e2e-tests-kubelet-test-8wntp, resource: bindings, ignored listing per whitelist May 26 12:37:57.478: INFO: namespace e2e-tests-kubelet-test-8wntp deletion completed in 6.073979535s • [SLOW TEST:22.212 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:37:57.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 26 12:37:57.599: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba583523-9f4d-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-fchgq" to be "success or 
failure" May 26 12:37:57.621: INFO: Pod "downwardapi-volume-ba583523-9f4d-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.33611ms May 26 12:37:59.625: INFO: Pod "downwardapi-volume-ba583523-9f4d-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026079501s May 26 12:38:01.629: INFO: Pod "downwardapi-volume-ba583523-9f4d-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029602833s May 26 12:38:03.632: INFO: Pod "downwardapi-volume-ba583523-9f4d-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033160141s May 26 12:38:05.636: INFO: Pod "downwardapi-volume-ba583523-9f4d-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036633046s May 26 12:38:07.639: INFO: Pod "downwardapi-volume-ba583523-9f4d-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.040159266s May 26 12:38:09.643: INFO: Pod "downwardapi-volume-ba583523-9f4d-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.04395301s STEP: Saw pod success May 26 12:38:09.643: INFO: Pod "downwardapi-volume-ba583523-9f4d-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:38:09.646: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ba583523-9f4d-11ea-b1d1-0242ac110018 container client-container: STEP: delete the pod May 26 12:38:09.666: INFO: Waiting for pod downwardapi-volume-ba583523-9f4d-11ea-b1d1-0242ac110018 to disappear May 26 12:38:09.670: INFO: Pod downwardapi-volume-ba583523-9f4d-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:38:09.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fchgq" for this suite. 
May 26 12:38:15.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:38:15.736: INFO: namespace: e2e-tests-downward-api-fchgq, resource: bindings, ignored listing per whitelist May 26 12:38:15.750: INFO: namespace e2e-tests-downward-api-fchgq deletion completed in 6.076375189s • [SLOW TEST:18.272 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:38:15.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-c540825d-9f4d-11ea-b1d1-0242ac110018 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-c540825d-9f4d-11ea-b1d1-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:39:51.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-f94mx" for this suite. 
May 26 12:40:13.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:40:13.123: INFO: namespace: e2e-tests-configmap-f94mx, resource: bindings, ignored listing per whitelist May 26 12:40:13.151: INFO: namespace e2e-tests-configmap-f94mx deletion completed in 22.082153334s • [SLOW TEST:117.401 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:40:13.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-0b369a8f-9f4e-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume secrets May 26 12:40:13.285: INFO: Waiting up to 5m0s for pod "pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018" in namespace "e2e-tests-secrets-hqjw5" to be "success or failure" May 26 12:40:13.290: INFO: Pod "pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.804404ms May 26 12:40:15.294: INFO: Pod "pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008703863s May 26 12:40:17.297: INFO: Pod "pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012078594s May 26 12:40:19.301: INFO: Pod "pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015694823s May 26 12:40:21.304: INFO: Pod "pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018590426s May 26 12:40:23.311: INFO: Pod "pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025876881s May 26 12:40:25.314: INFO: Pod "pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 12.028342833s May 26 12:40:27.401: INFO: Pod "pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.116005292s STEP: Saw pod success May 26 12:40:27.401: INFO: Pod "pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:40:27.403: INFO: Trying to get logs from node hunter-worker pod pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018 container secret-env-test: STEP: delete the pod May 26 12:40:27.429: INFO: Waiting for pod pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018 to disappear May 26 12:40:27.445: INFO: Pod pod-secrets-0b392f89-9f4e-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:40:27.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-hqjw5" for this suite. 
May 26 12:40:33.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:40:33.482: INFO: namespace: e2e-tests-secrets-hqjw5, resource: bindings, ignored listing per whitelist May 26 12:40:33.518: INFO: namespace e2e-tests-secrets-hqjw5 deletion completed in 6.070225906s • [SLOW TEST:20.367 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:40:33.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-175d6eff-9f4e-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume configMaps May 26 12:40:33.668: INFO: Waiting up to 5m0s for pod "pod-configmaps-175df198-9f4e-11ea-b1d1-0242ac110018" in namespace "e2e-tests-configmap-n2zqd" to be "success or failure" May 26 12:40:33.673: INFO: Pod "pod-configmaps-175df198-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.127482ms May 26 12:40:35.676: INFO: Pod "pod-configmaps-175df198-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008416048s May 26 12:40:37.680: INFO: Pod "pod-configmaps-175df198-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011997842s May 26 12:40:39.683: INFO: Pod "pod-configmaps-175df198-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014905388s May 26 12:40:41.688: INFO: Pod "pod-configmaps-175df198-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020811016s May 26 12:40:43.691: INFO: Pod "pod-configmaps-175df198-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02322542s May 26 12:40:45.694: INFO: Pod "pod-configmaps-175df198-9f4e-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.026699486s STEP: Saw pod success May 26 12:40:45.694: INFO: Pod "pod-configmaps-175df198-9f4e-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:40:45.697: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-175df198-9f4e-11ea-b1d1-0242ac110018 container configmap-volume-test: STEP: delete the pod May 26 12:40:45.740: INFO: Waiting for pod pod-configmaps-175df198-9f4e-11ea-b1d1-0242ac110018 to disappear May 26 12:40:45.763: INFO: Pod pod-configmaps-175df198-9f4e-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:40:45.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-n2zqd" for this suite. 
May 26 12:40:51.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:40:51.809: INFO: namespace: e2e-tests-configmap-n2zqd, resource: bindings, ignored listing per whitelist May 26 12:40:51.853: INFO: namespace e2e-tests-configmap-n2zqd deletion completed in 6.087278032s • [SLOW TEST:18.335 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:40:51.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 26 12:40:52.186: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 26 12:40:57.193: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 26 12:41:05.199: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 26 12:41:07.203: INFO: Creating deployment "test-rollover-deployment" May 26 12:41:07.218: INFO: Make sure deployment 
"test-rollover-deployment" performs scaling operations May 26 12:41:09.223: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 26 12:41:09.229: INFO: Ensure that both replica sets have 1 created replica May 26 12:41:09.233: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 26 12:41:09.237: INFO: Updating deployment test-rollover-deployment May 26 12:41:09.238: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 26 12:41:11.246: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 26 12:41:11.252: INFO: Make sure deployment "test-rollover-deployment" is complete May 26 12:41:11.257: INFO: all replica sets need to contain the pod-template-hash label May 26 12:41:11.257: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093669, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:41:13.277: INFO: all replica sets need to contain the pod-template-hash label May 26 12:41:13.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093669, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:41:15.263: INFO: all replica sets need to contain the pod-template-hash label May 26 12:41:15.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093669, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:41:17.264: INFO: all replica sets need to contain the pod-template-hash label May 26 12:41:17.264: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093669, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:41:19.263: INFO: all replica sets need to contain the pod-template-hash label May 26 12:41:19.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093669, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:41:21.261: INFO: 
all replica sets need to contain the pod-template-hash label May 26 12:41:21.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093669, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:41:23.282: INFO: all replica sets need to contain the pod-template-hash label May 26 12:41:23.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093681, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:41:25.272: INFO: all replica sets need to contain the pod-template-hash label May 26 12:41:25.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093681, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:41:27.264: INFO: all replica sets need to contain the pod-template-hash label May 26 12:41:27.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093681, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:41:29.264: INFO: all replica sets need to contain the pod-template-hash label May 26 12:41:29.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093681, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:41:31.286: INFO: all replica sets need to contain the pod-template-hash label May 26 12:41:31.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63726093681, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726093667, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:41:34.186: INFO: May 26 12:41:34.187: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 26 12:41:34.376: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-bbx24,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bbx24/deployments/test-rollover-deployment,UID:2b5db7d5-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12618236,Generation:2,CreationTimestamp:2020-05-26 12:41:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-26 12:41:07 +0000 UTC 2020-05-26 12:41:07 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-26 12:41:31 +0000 UTC 2020-05-26 12:41:07 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 26 12:41:34.379: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-bbx24,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bbx24/replicasets/test-rollover-deployment-5b8479fdb6,UID:2c942778-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12618227,Generation:2,CreationTimestamp:2020-05-26 12:41:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2b5db7d5-9f4e-11ea-99e8-0242ac110002 0xc002058ae7 0xc002058ae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 26 12:41:34.379: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 26 12:41:34.380: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-bbx24,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bbx24/replicasets/test-rollover-controller,UID:2241311c-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12618235,Generation:2,CreationTimestamp:2020-05-26 12:40:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2b5db7d5-9f4e-11ea-99e8-0242ac110002 0xc00230ff9f 0xc00230ffb0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 26 12:41:34.380: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-bbx24,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bbx24/replicasets/test-rollover-deployment-58494b7559,UID:2b60b70b-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12618175,Generation:2,CreationTimestamp:2020-05-26 12:41:07 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2b5db7d5-9f4e-11ea-99e8-0242ac110002 0xc002058707 0xc002058708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 26 12:41:34.382: INFO: Pod "test-rollover-deployment-5b8479fdb6-kmmmr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-kmmmr,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-bbx24,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bbx24/pods/test-rollover-deployment-5b8479fdb6-kmmmr,UID:2ca990e5-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12618205,Generation:0,CreationTimestamp:2020-05-26 12:41:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 2c942778-9f4e-11ea-99e8-0242ac110002 0xc001c54497 0xc001c54498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s9zgx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9zgx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-s9zgx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c54800} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c55110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:41:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:41:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:41:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:41:09 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.205,StartTime:2020-05-26 12:41:09 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-26 12:41:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://499b8cd5fbac4ee45712e12ece27ae8f837c64cc3d8ac770d4e669882c752c40}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:41:34.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-bbx24" for this suite. May 26 12:41:40.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:41:40.405: INFO: namespace: e2e-tests-deployment-bbx24, resource: bindings, ignored listing per whitelist May 26 12:41:40.520: INFO: namespace e2e-tests-deployment-bbx24 deletion completed in 6.13512055s • [SLOW TEST:48.667 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:41:40.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 26 12:41:40.671: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f4d539f-9f4e-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-p7glg" to be "success or failure" May 26 12:41:40.693: INFO: Pod "downwardapi-volume-3f4d539f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.115532ms May 26 12:41:42.697: INFO: Pod "downwardapi-volume-3f4d539f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026234164s May 26 12:41:44.700: INFO: Pod "downwardapi-volume-3f4d539f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029440154s May 26 12:41:46.704: INFO: Pod "downwardapi-volume-3f4d539f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032986862s May 26 12:41:48.706: INFO: Pod "downwardapi-volume-3f4d539f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035173937s May 26 12:41:50.709: INFO: Pod "downwardapi-volume-3f4d539f-9f4e-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 10.038384374s May 26 12:41:52.726: INFO: Pod "downwardapi-volume-3f4d539f-9f4e-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.055148041s STEP: Saw pod success May 26 12:41:52.726: INFO: Pod "downwardapi-volume-3f4d539f-9f4e-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:41:52.825: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3f4d539f-9f4e-11ea-b1d1-0242ac110018 container client-container: STEP: delete the pod May 26 12:41:52.904: INFO: Waiting for pod downwardapi-volume-3f4d539f-9f4e-11ea-b1d1-0242ac110018 to disappear May 26 12:41:52.915: INFO: Pod downwardapi-volume-3f4d539f-9f4e-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:41:52.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-p7glg" for this suite. May 26 12:41:58.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:41:58.947: INFO: namespace: e2e-tests-downward-api-p7glg, resource: bindings, ignored listing per whitelist May 26 12:41:58.986: INFO: namespace e2e-tests-downward-api-p7glg deletion completed in 6.068699587s • [SLOW TEST:18.466 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client 
May 26 12:41:58.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 26 12:41:59.084: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:42:19.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-8gstz" for this suite.
May 26 12:42:25.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:42:25.896: INFO: namespace: e2e-tests-init-container-8gstz, resource: bindings, ignored listing per whitelist
May 26 12:42:26.592: INFO: namespace e2e-tests-init-container-8gstz deletion completed in 6.752023775s
• [SLOW TEST:27.606 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:42:26.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-gppkd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 26 12:42:26.751: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 26 12:43:10.880: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.118:8080/dial?request=hostName&protocol=udp&host=10.244.1.206&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-gppkd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 26 12:43:10.880: INFO: >>> kubeConfig: /root/.kube/config
I0526 12:43:10.912162 6 log.go:172] (0xc001ad0420) (0xc001c7a640) Create stream
I0526 12:43:10.912192 6 log.go:172] (0xc001ad0420) (0xc001c7a640) Stream added, broadcasting: 1
I0526 12:43:10.914125 6 log.go:172] (0xc001ad0420) Reply frame received for 1
I0526 12:43:10.914150 6 log.go:172] (0xc001ad0420) (0xc001510500) Create stream
I0526 12:43:10.914160 6 log.go:172] (0xc001ad0420) (0xc001510500) Stream added, broadcasting: 3
I0526 12:43:10.914837 6 log.go:172] (0xc001ad0420) Reply frame received for 3
I0526 12:43:10.914869 6 log.go:172] (0xc001ad0420) (0xc002308140) Create stream
I0526 12:43:10.914881 6 log.go:172] (0xc001ad0420) (0xc002308140) Stream added, broadcasting: 5
I0526 12:43:10.915411 6 log.go:172] (0xc001ad0420) Reply frame received for 5
I0526 12:43:11.428456 6 log.go:172] (0xc001ad0420) Data frame received for 3
I0526 12:43:11.428497 6 log.go:172] (0xc001510500) (3) Data frame handling
I0526 12:43:11.428536 6 log.go:172] (0xc001510500) (3) Data frame sent
I0526 12:43:11.428857 6 log.go:172] (0xc001ad0420) Data frame received for 3
I0526 12:43:11.428888 6 log.go:172] (0xc001510500) (3) Data frame handling
I0526 12:43:11.429170 6 log.go:172] (0xc001ad0420) Data frame received for 5
I0526 12:43:11.429278 6 log.go:172] (0xc002308140) (5) Data frame handling
I0526 12:43:11.430945 6 log.go:172] (0xc001ad0420) Data frame received for 1
I0526 12:43:11.431000 6 log.go:172] (0xc001c7a640) (1) Data frame handling
I0526 12:43:11.431035 6 log.go:172] (0xc001c7a640) (1) Data frame sent
I0526 12:43:11.431094 6 log.go:172] (0xc001ad0420) (0xc001c7a640) Stream removed, broadcasting: 1
I0526 12:43:11.431201 6 log.go:172] (0xc001ad0420) Go away received
I0526 12:43:11.431222 6 log.go:172] (0xc001ad0420) (0xc001c7a640) Stream removed, broadcasting: 1
I0526 12:43:11.431257 6 log.go:172] (0xc001ad0420) (0xc001510500) Stream removed, broadcasting: 3
I0526 12:43:11.431275 6 log.go:172] (0xc001ad0420) (0xc002308140) Stream removed, broadcasting: 5
May 26 12:43:11.431: INFO: Waiting for endpoints: map[]
May 26 12:43:11.434: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.118:8080/dial?request=hostName&protocol=udp&host=10.244.2.117&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-gppkd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 26 12:43:11.434: INFO: >>> kubeConfig: /root/.kube/config
I0526 12:43:11.468901 6 log.go:172] (0xc001ad08f0) (0xc001c7a8c0) Create stream
I0526 12:43:11.468932 6 log.go:172] (0xc001ad08f0) (0xc001c7a8c0) Stream added, broadcasting: 1
I0526 12:43:11.471170 6 log.go:172] (0xc001ad08f0) Reply frame received for 1
I0526 12:43:11.471200 6 log.go:172] (0xc001ad08f0) (0xc001eae000) Create stream
I0526 12:43:11.471230 6 log.go:172] (0xc001ad08f0) (0xc001eae000) Stream added, broadcasting: 3
I0526 12:43:11.472120 6 log.go:172] (0xc001ad08f0) Reply frame received for 3
I0526 12:43:11.472142 6 log.go:172] (0xc001ad08f0) (0xc0015105a0) Create stream
I0526 12:43:11.472154 6 log.go:172] (0xc001ad08f0) (0xc0015105a0) Stream added, broadcasting: 5
I0526 12:43:11.472968 6 log.go:172] (0xc001ad08f0) Reply frame received for 5
I0526 12:43:11.798776 6 log.go:172] (0xc001ad08f0) Data frame received for 5
I0526 12:43:11.798800 6 log.go:172] (0xc0015105a0) (5) Data frame handling
I0526 12:43:11.798821 6 log.go:172] (0xc001ad08f0) Data frame received for 3
I0526 12:43:11.798892 6 log.go:172] (0xc001eae000) (3) Data frame handling
I0526 12:43:11.798935 6 log.go:172] (0xc001eae000) (3) Data frame sent
I0526 12:43:11.798958 6 log.go:172] (0xc001ad08f0) Data frame received for 3
I0526 12:43:11.798978 6 log.go:172] (0xc001eae000) (3) Data frame handling
I0526 12:43:11.799722 6 log.go:172] (0xc001ad08f0) Data frame received for 1
I0526 12:43:11.799738 6 log.go:172] (0xc001c7a8c0) (1) Data frame handling
I0526 12:43:11.799748 6 log.go:172] (0xc001c7a8c0) (1) Data frame sent
I0526 12:43:11.799766 6 log.go:172] (0xc001ad08f0) (0xc001c7a8c0) Stream removed, broadcasting: 1
I0526 12:43:11.799781 6 log.go:172] (0xc001ad08f0) Go away received
I0526 12:43:11.799879 6 log.go:172] (0xc001ad08f0) (0xc001c7a8c0) Stream removed, broadcasting: 1
I0526 12:43:11.799899 6 log.go:172] (0xc001ad08f0) (0xc001eae000) Stream removed, broadcasting: 3
I0526 12:43:11.799913 6 log.go:172] (0xc001ad08f0) (0xc0015105a0) Stream removed, broadcasting: 5
May 26 12:43:11.799: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:43:11.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-gppkd" for this suite.
May 26 12:43:35.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:43:36.013: INFO: namespace: e2e-tests-pod-network-test-gppkd, resource: bindings, ignored listing per whitelist
May 26 12:43:36.068: INFO: namespace e2e-tests-pod-network-test-gppkd deletion completed in 24.265431083s
• [SLOW TEST:69.475 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:43:36.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-842981eb-9f4e-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume secrets
May 26 12:43:36.216: INFO: Waiting up to 5m0s for pod "pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018" in namespace "e2e-tests-secrets-g6pxx" to be "success or failure"
May 26 12:43:36.229: INFO: Pod "pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.670898ms
May 26 12:43:38.232: INFO: Pod "pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016037098s
May 26 12:43:40.235: INFO: Pod "pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018846305s
May 26 12:43:42.238: INFO: Pod "pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021825382s
May 26 12:43:44.242: INFO: Pod "pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025919228s
May 26 12:43:46.245: INFO: Pod "pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.029587437s
May 26 12:43:48.253: INFO: Pod "pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.037247799s
May 26 12:43:50.256: INFO: Pod "pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.040093369s
May 26 12:43:52.259: INFO: Pod "pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.04337498s
STEP: Saw pod success
May 26 12:43:52.259: INFO: Pod "pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 12:43:52.262: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018 container secret-volume-test:
STEP: delete the pod
May 26 12:43:52.277: INFO: Waiting for pod pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018 to disappear
May 26 12:43:52.282: INFO: Pod pod-secrets-842e651f-9f4e-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:43:52.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-g6pxx" for this suite.
May 26 12:43:58.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:43:58.348: INFO: namespace: e2e-tests-secrets-g6pxx, resource: bindings, ignored listing per whitelist
May 26 12:43:58.392: INFO: namespace e2e-tests-secrets-g6pxx deletion completed in 6.107698234s
• [SLOW TEST:22.324 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:43:58.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 26 12:43:58.691: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"918e4826-9f4e-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00265ef52), BlockOwnerDeletion:(*bool)(0xc00265ef53)}}
May 26 12:43:58.779: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"917f1336-9f4e-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00230f762), BlockOwnerDeletion:(*bool)(0xc00230f763)}}
May 26 12:43:58.799: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"917f9a51-9f4e-11ea-99e8-0242ac110002", Controller:(*bool)(0xc002545482), BlockOwnerDeletion:(*bool)(0xc002545483)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:44:03.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-85k5t" for this suite.
May 26 12:44:09.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:44:09.987: INFO: namespace: e2e-tests-gc-85k5t, resource: bindings, ignored listing per whitelist
May 26 12:44:10.012: INFO: namespace e2e-tests-gc-85k5t deletion completed in 6.076359927s
• [SLOW TEST:11.619 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:44:10.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 26 12:44:10.187: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:44:11.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-fr86b" for this suite.
May 26 12:44:17.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:44:17.342: INFO: namespace: e2e-tests-custom-resource-definition-fr86b, resource: bindings, ignored listing per whitelist
May 26 12:44:17.435: INFO: namespace e2e-tests-custom-resource-definition-fr86b deletion completed in 6.141683727s
• [SLOW TEST:7.423 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:44:17.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-j6nj2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-j6nj2 to expose endpoints map[]
May 26 12:44:17.649: INFO: Get endpoints failed (22.050447ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May 26 12:44:18.674: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-j6nj2 exposes endpoints map[] (1.046781679s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-j6nj2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-j6nj2 to expose endpoints map[pod1:[80]]
May 26 12:44:22.742: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.062478293s elapsed, will retry)
May 26 12:44:27.771: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.091949023s elapsed, will retry)
May 26 12:44:30.788: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-j6nj2 exposes endpoints map[pod1:[80]] (12.108191306s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-j6nj2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-j6nj2 to expose endpoints map[pod2:[80] pod1:[80]]
May 26 12:44:34.850: INFO: Unexpected endpoints: found map[9d7e1ddb-9f4e-11ea-99e8-0242ac110002:[80]], expected map[pod2:[80] pod1:[80]] (4.058626668s elapsed, will retry)
May 26 12:44:39.894: INFO: Unexpected endpoints: found map[9d7e1ddb-9f4e-11ea-99e8-0242ac110002:[80]], expected map[pod1:[80] pod2:[80]] (9.103140223s elapsed, will retry)
May 26 12:44:45.164: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-j6nj2 exposes endpoints map[pod1:[80] pod2:[80]] (14.372646188s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-j6nj2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-j6nj2 to expose endpoints map[pod2:[80]]
May 26 12:44:46.863: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-j6nj2 exposes endpoints map[pod2:[80]] (1.695005917s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-j6nj2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-j6nj2 to expose endpoints map[]
May 26 12:44:48.255: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-j6nj2 exposes endpoints map[] (1.368587962s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:44:48.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-j6nj2" for this suite.
May 26 12:45:16.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:45:16.817: INFO: namespace: e2e-tests-services-j6nj2, resource: bindings, ignored listing per whitelist
May 26 12:45:16.840: INFO: namespace e2e-tests-services-j6nj2 deletion completed in 28.069744123s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:59.406 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:45:16.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 26 12:45:16.929: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:45:40.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-sd4ql" for this suite.
May 26 12:46:02.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:46:02.158: INFO: namespace: e2e-tests-init-container-sd4ql, resource: bindings, ignored listing per whitelist
May 26 12:46:02.197: INFO: namespace e2e-tests-init-container-sd4ql deletion completed in 22.06625653s
• [SLOW TEST:45.356 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:46:02.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-db813088-9f4e-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume configMaps
May 26 12:46:02.934: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-db863b7a-9f4e-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-lgwms" to be "success or failure"
May 26 12:46:03.042: INFO: Pod "pod-projected-configmaps-db863b7a-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 108.824635ms
May 26 12:46:05.046: INFO: Pod "pod-projected-configmaps-db863b7a-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112735061s
May 26 12:46:07.066: INFO: Pod "pod-projected-configmaps-db863b7a-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132205565s
May 26 12:46:09.083: INFO: Pod "pod-projected-configmaps-db863b7a-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14965514s
May 26 12:46:11.088: INFO: Pod "pod-projected-configmaps-db863b7a-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15456373s
May 26 12:46:13.091: INFO: Pod "pod-projected-configmaps-db863b7a-9f4e-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.157885739s
May 26 12:46:15.095: INFO: Pod "pod-projected-configmaps-db863b7a-9f4e-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.16139418s
STEP: Saw pod success
May 26 12:46:15.095: INFO: Pod "pod-projected-configmaps-db863b7a-9f4e-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 12:46:15.098: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-db863b7a-9f4e-11ea-b1d1-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
May 26 12:46:15.550: INFO: Waiting for pod pod-projected-configmaps-db863b7a-9f4e-11ea-b1d1-0242ac110018 to disappear
May 26 12:46:15.999: INFO: Pod pod-projected-configmaps-db863b7a-9f4e-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:46:15.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lgwms" for this suite.
May 26 12:46:22.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:46:22.109: INFO: namespace: e2e-tests-projected-lgwms, resource: bindings, ignored listing per whitelist
May 26 12:46:22.141: INFO: namespace e2e-tests-projected-lgwms deletion completed in 6.138624971s
• [SLOW TEST:19.944 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:46:22.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 26 12:46:22.288: INFO: Creating deployment "nginx-deployment"
May 26 12:46:22.304: INFO: Waiting for observed generation 1
May 26 12:46:24.313: INFO: Waiting for all required pods to come up
May 26 12:46:24.317: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 26 12:47:12.323: INFO: Waiting for deployment "nginx-deployment" to complete
May 26 12:47:12.327: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 26 12:47:12.333: INFO: Updating deployment nginx-deployment
May 26 12:47:12.333: INFO: Waiting for observed generation 2
May 26 12:47:14.357: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 26 12:47:14.358: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 26 12:47:14.360: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 26 12:47:14.366: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 26 12:47:14.366: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 26 12:47:14.368: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 26 12:47:14.371: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
May 26 12:47:14.371: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
May 26 12:47:14.375: INFO: Updating deployment nginx-deployment May 26 12:47:14.375: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 26 12:47:14.465: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 26 12:47:14.715: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 26 12:47:16.892: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dg4s7/deployments/nginx-deployment,UID:e72bd626-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12619502,Generation:3,CreationTimestamp:2020-05-26 12:46:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-05-26 12:47:14 +0000 UTC 2020-05-26 12:47:14 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-26 12:47:15 +0000 UTC 2020-05-26 12:46:22 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 26 12:47:16.895: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dg4s7/replicasets/nginx-deployment-5c98f8fb5,UID:0500038b-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619496,Generation:3,CreationTimestamp:2020-05-26 12:47:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e72bd626-9f4e-11ea-99e8-0242ac110002 0xc0017a84f7 0xc0017a84f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 26 12:47:16.895: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 26 12:47:16.895: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dg4s7/replicasets/nginx-deployment-85ddf47c5d,UID:e72f387c-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12619479,Generation:3,CreationTimestamp:2020-05-26 12:46:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e72bd626-9f4e-11ea-99e8-0242ac110002 0xc0017a85b7 0xc0017a85b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 26 12:47:16.901: INFO: Pod "nginx-deployment-5c98f8fb5-8j4s8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8j4s8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-8j4s8,UID:0663fe2d-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619457,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0017a9550 0xc0017a9551}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a95d0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0017a95f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.901: INFO: Pod "nginx-deployment-5c98f8fb5-95s6n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-95s6n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-95s6n,UID:06641acd-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619530,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0017a9667 0xc0017a9668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a9720} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017a9740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-26 12:47:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.901: INFO: Pod "nginx-deployment-5c98f8fb5-cf9j6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cf9j6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-cf9j6,UID:0537896e-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619419,Generation:0,CreationTimestamp:2020-05-26 12:47:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0017a9907 0xc0017a9908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a9980} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0017a99a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-26 12:47:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.901: INFO: Pod "nginx-deployment-5c98f8fb5-hz42h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hz42h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-hz42h,UID:0510a766-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619394,Generation:0,CreationTimestamp:2020-05-26 12:47:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0017a9a97 0xc0017a9a98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a9bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017a9bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-26 12:47:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.901: INFO: Pod "nginx-deployment-5c98f8fb5-j4jls" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-j4jls,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-j4jls,UID:0681dcc8-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619485,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0017a9c97 0xc0017a9c98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a9e20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017a9e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.902: INFO: Pod "nginx-deployment-5c98f8fb5-jjd97" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jjd97,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-jjd97,UID:053275c2-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619416,Generation:0,CreationTimestamp:2020-05-26 12:47:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0017a9eb7 0xc0017a9eb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017a9f30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017a9f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-26 12:47:12 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.902: INFO: Pod "nginx-deployment-5c98f8fb5-jqm6v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jqm6v,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-jqm6v,UID:066bf45e-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619474,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0016da0e7 0xc0016da0e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016da250} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016da2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.902: INFO: Pod "nginx-deployment-5c98f8fb5-lpf95" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lpf95,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-lpf95,UID:066bf96d-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619480,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0016da3e7 0xc0016da3e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016da600} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016da620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.902: INFO: Pod "nginx-deployment-5c98f8fb5-ntdxp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ntdxp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-ntdxp,UID:06453fc5-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619499,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0016daa37 0xc0016daa38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016dab20} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0016dab40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-26 12:47:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.902: INFO: Pod "nginx-deployment-5c98f8fb5-qfz2j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qfz2j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-qfz2j,UID:066bfc38-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619478,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0016daca7 0xc0016daca8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016dad40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016dada0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.902: INFO: Pod "nginx-deployment-5c98f8fb5-swrbs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-swrbs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-swrbs,UID:051999e7-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619407,Generation:0,CreationTimestamp:2020-05-26 12:47:12 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0016dae17 0xc0016dae18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016dae90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016daeb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-26 12:47:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.903: INFO: Pod "nginx-deployment-5c98f8fb5-w58t4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-w58t4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-w58t4,UID:0519a9b1-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619413,Generation:0,CreationTimestamp:2020-05-26 12:47:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0016db037 0xc0016db038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016db0b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016db0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-26 12:47:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.903: INFO: Pod "nginx-deployment-5c98f8fb5-wjmrf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wjmrf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-5c98f8fb5-wjmrf,UID:066bf762-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619481,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0500038b-9f4f-11ea-99e8-0242ac110002 0xc0016db1e7 0xc0016db1e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016db260} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0016db280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.903: INFO: Pod "nginx-deployment-85ddf47c5d-556ss" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-556ss,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-556ss,UID:06456155-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619500,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc0016db2f7 0xc0016db2f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016db370} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016db390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-26 12:47:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.903: INFO: Pod "nginx-deployment-85ddf47c5d-5r4qt" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5r4qt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-5r4qt,UID:e73dbae3-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12619293,Generation:0,CreationTimestamp:2020-05-26 12:46:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc0016db447 0xc0016db448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0016db4c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016db4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.212,StartTime:2020-05-26 12:46:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-26 12:46:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bac452fc253793c6d3c54dc05e3c86726ac4c66e221648c70f4e478b6b8302a2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.903: INFO: Pod "nginx-deployment-85ddf47c5d-6xqgq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6xqgq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-6xqgq,UID:0664b76e-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619467,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc0016db5a7 0xc0016db5a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016db760} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016db780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.903: INFO: Pod "nginx-deployment-85ddf47c5d-99dzm" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-99dzm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-99dzm,UID:e73dc612-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12619322,Generation:0,CreationTimestamp:2020-05-26 12:46:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc0016db7f7 0xc0016db7f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0016db870} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016db890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.127,StartTime:2020-05-26 12:46:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-26 12:46:56 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2edc9a91b0d9a8873809d5cd61a355091e76a211a2c8c854ae58a03ec6f03ec7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.903: INFO: Pod "nginx-deployment-85ddf47c5d-b87hs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b87hs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-b87hs,UID:0645566d-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619515,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc0016db9d7 0xc0016db9d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016dbb10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016dbb30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-26 12:47:14 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.904: INFO: Pod "nginx-deployment-85ddf47c5d-bzglw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bzglw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-bzglw,UID:06455b69-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619493,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc0016dbbe7 0xc0016dbbe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016dbc60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016dbd40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-26 12:47:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.904: INFO: Pod "nginx-deployment-85ddf47c5d-f8dhh" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f8dhh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-f8dhh,UID:0664b0cd-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619469,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc0016dbdf7 0xc0016dbdf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0016dbe80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016dbea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.904: INFO: Pod "nginx-deployment-85ddf47c5d-hfgxg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hfgxg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-hfgxg,UID:e7395ec4-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12619269,Generation:0,CreationTimestamp:2020-05-26 12:46:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc0016dbf17 0xc0016dbf18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016dbf90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016dbfb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.211,StartTime:2020-05-26 12:46:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-26 12:46:40 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://82fe48090cd767460f9ad8712795f17a403fea0119bd2e5b256130f3cb93cfb0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.904: INFO: Pod "nginx-deployment-85ddf47c5d-jlcml" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jlcml,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-jlcml,UID:0642044c-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619476,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc001d4a0b7 0xc001d4a0b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001d4a2c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d4a330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-26 12:47:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.904: INFO: Pod "nginx-deployment-85ddf47c5d-l6sb7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l6sb7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-l6sb7,UID:0642644e-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619475,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc001d4a597 0xc001d4a598}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d4a790} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d4a7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-26 12:47:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} 
{nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.904: INFO: Pod "nginx-deployment-85ddf47c5d-ljvg5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ljvg5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-ljvg5,UID:0664a40d-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619535,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc001d4a917 0xc001d4a918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d4ac60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d4ac80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-26 12:47:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.905: INFO: Pod "nginx-deployment-85ddf47c5d-lv7vx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lv7vx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-lv7vx,UID:066498bc-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619468,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc001d4af37 0xc001d4af38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001d4b060} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d4b080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.905: INFO: Pod "nginx-deployment-85ddf47c5d-rjzn5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rjzn5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-rjzn5,UID:e73dacfe-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12619283,Generation:0,CreationTimestamp:2020-05-26 12:46:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc001d4b157 0xc001d4b158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d4b3a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d4b3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.125,StartTime:2020-05-26 12:46:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-26 12:46:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fab82afa8f6c468ea71c9e2e6c4001cb2f68d9684bbb1c003d2ec56955b24ff3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.905: INFO: Pod "nginx-deployment-85ddf47c5d-sljng" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sljng,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-sljng,UID:e7396312-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12619255,Generation:0,CreationTimestamp:2020-05-26 12:46:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc001d4b4e7 0xc001d4b4e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001d4b780} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d4b7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.210,StartTime:2020-05-26 12:46:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-26 12:46:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://275a2a572a7cc24cdf1f6bafa4889b54bf4b853baa00ae83479b500659c3cbc8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.905: INFO: Pod "nginx-deployment-85ddf47c5d-sqt7p" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sqt7p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-sqt7p,UID:e73dd0b6-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12619308,Generation:0,CreationTimestamp:2020-05-26 12:46:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc001d4b997 0xc001d4b998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d4bc20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d4bc60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.126,StartTime:2020-05-26 12:46:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-26 12:46:52 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://66cdf91a2f48e5ecd29bdcb7547228c445187bd192f851b69640bf1dc9962fff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.905: INFO: Pod "nginx-deployment-85ddf47c5d-tj7gf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tj7gf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-tj7gf,UID:e733fd32-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12619241,Generation:0,CreationTimestamp:2020-05-26 12:46:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc001d4bfc7 0xc001d4bfc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012e8330} {node.kubernetes.io/unreachable Exists NoExecute 0xc0012e8350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.124,StartTime:2020-05-26 12:46:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-26 12:46:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0b1f405a77a056eec9a35b4d54e7114b7531d4d26ebfcea7b6f89b64d5ed8908}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.906: INFO: Pod "nginx-deployment-85ddf47c5d-tvjdd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tvjdd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-tvjdd,UID:06426a56-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619492,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc0014f8027 0xc0014f8028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0014f8110} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014f81e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-26 12:47:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.906: INFO: Pod "nginx-deployment-85ddf47c5d-vjbsg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vjbsg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-vjbsg,UID:064552aa-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619528,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc0014f8387 0xc0014f8388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014f8920} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014f8940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-26 12:47:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} 
{nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.906: INFO: Pod "nginx-deployment-85ddf47c5d-vpnrz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vpnrz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-vpnrz,UID:0664b28d-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12619465,Generation:0,CreationTimestamp:2020-05-26 12:47:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc0014f8a17 0xc0014f8a18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014f8a90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014f8ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 26 12:47:16.906: INFO: Pod "nginx-deployment-85ddf47c5d-xfbk5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xfbk5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-dg4s7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dg4s7/pods/nginx-deployment-85ddf47c5d-xfbk5,UID:e7402add-9f4e-11ea-99e8-0242ac110002,ResourceVersion:12619339,Generation:0,CreationTimestamp:2020-05-26 12:46:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e72f387c-9f4e-11ea-99e8-0242ac110002 0xc0018a20e7 0xc0018a20e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9l5x {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-p9l5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9l5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018a2160} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018a2180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:47:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:46:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.213,StartTime:2020-05-26 12:46:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-26 12:47:02 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://240746a58ff89d508b8999ac5116822869b2a6d0b1489235f09094e229d2886b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:47:16.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-dg4s7" for this suite. May 26 12:47:28.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:47:28.996: INFO: namespace: e2e-tests-deployment-dg4s7, resource: bindings, ignored listing per whitelist May 26 12:47:29.037: INFO: namespace e2e-tests-deployment-dg4s7 deletion completed in 12.127498343s • [SLOW TEST:66.895 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:47:29.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 26 12:47:29.581: INFO: Waiting up to 5m0s for pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018" in namespace "e2e-tests-containers-m5km7" to be "success or failure" May 26 12:47:29.584: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.675262ms May 26 12:47:31.623: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041416798s May 26 12:47:33.626: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044521413s May 26 12:47:35.629: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047537757s May 26 12:47:37.635: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053466205s May 26 12:47:39.659: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.077181704s May 26 12:47:41.665: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.083232062s May 26 12:47:43.668: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.086562882s May 26 12:47:45.671: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.089602872s May 26 12:47:47.673: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.09184771s May 26 12:47:49.695: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.113208222s May 26 12:47:51.698: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.116568539s May 26 12:47:53.701: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 24.119821596s May 26 12:47:55.704: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.122615919s May 26 12:47:57.707: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 28.12596388s May 26 12:47:59.711: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.129566178s May 26 12:48:01.714: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.132861587s May 26 12:48:03.737: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 34.156054347s May 26 12:48:05.740: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 36.158580843s May 26 12:48:07.743: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 38.16132077s May 26 12:48:09.761: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 40.179773804s May 26 12:48:11.809: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.227724598s May 26 12:48:13.813: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 44.231255743s May 26 12:48:15.816: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 46.234313279s May 26 12:48:17.819: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 48.237661606s May 26 12:48:19.822: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 50.24112882s May 26 12:48:21.839: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 52.257913814s May 26 12:48:23.843: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 54.261561426s May 26 12:48:25.846: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 56.264571062s May 26 12:48:27.849: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 58.267618477s May 26 12:48:29.851: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.269835468s May 26 12:48:31.941: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.359831226s May 26 12:48:33.944: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.362507827s May 26 12:48:35.947: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. 
Elapsed: 1m6.365611821s May 26 12:48:37.951: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m8.369590127s STEP: Saw pod success May 26 12:48:37.951: INFO: Pod "client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:48:37.955: INFO: Trying to get logs from node hunter-worker2 pod client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018 container test-container: STEP: delete the pod May 26 12:48:37.998: INFO: Waiting for pod client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018 to disappear May 26 12:48:38.015: INFO: Pod client-containers-0f3faeba-9f4f-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:48:38.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-m5km7" for this suite. 
May 26 12:48:44.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:48:44.054: INFO: namespace: e2e-tests-containers-m5km7, resource: bindings, ignored listing per whitelist May 26 12:48:44.084: INFO: namespace e2e-tests-containers-m5km7 deletion completed in 6.066155426s • [SLOW TEST:75.047 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:48:44.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:49:00.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-gw2r7" for this suite. 
May 26 12:50:00.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:50:00.210: INFO: namespace: e2e-tests-kubelet-test-gw2r7, resource: bindings, ignored listing per whitelist May 26 12:50:00.263: INFO: namespace e2e-tests-kubelet-test-gw2r7 deletion completed in 1m0.077635328s • [SLOW TEST:76.179 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:50:00.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 26 12:50:00.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018" in 
namespace "e2e-tests-projected-lwl6p" to be "success or failure" May 26 12:50:00.463: INFO: Pod "downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 37.897313ms May 26 12:50:02.467: INFO: Pod "downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041225121s May 26 12:50:04.471: INFO: Pod "downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045052498s May 26 12:50:06.474: INFO: Pod "downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048361844s May 26 12:50:08.477: INFO: Pod "downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051412756s May 26 12:50:10.480: INFO: Pod "downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054858262s May 26 12:50:12.484: INFO: Pod "downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.058453709s May 26 12:50:14.487: INFO: Pod "downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 14.061942817s May 26 12:50:16.554: INFO: Pod "downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.128109214s STEP: Saw pod success May 26 12:50:16.554: INFO: Pod "downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:50:16.556: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018 container client-container: STEP: delete the pod May 26 12:50:16.604: INFO: Waiting for pod downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018 to disappear May 26 12:50:16.618: INFO: Pod downwardapi-volume-692fcd29-9f4f-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:50:16.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lwl6p" for this suite. May 26 12:50:22.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:50:22.653: INFO: namespace: e2e-tests-projected-lwl6p, resource: bindings, ignored listing per whitelist May 26 12:50:22.692: INFO: namespace e2e-tests-projected-lwl6p deletion completed in 6.070680626s • [SLOW TEST:22.429 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
STEP: Creating a kubernetes client May 26 12:50:22.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 26 12:50:23.039: INFO: Creating ReplicaSet my-hostname-basic-76ab687e-9f4f-11ea-b1d1-0242ac110018 May 26 12:50:23.114: INFO: Pod name my-hostname-basic-76ab687e-9f4f-11ea-b1d1-0242ac110018: Found 0 pods out of 1 May 26 12:50:28.116: INFO: Pod name my-hostname-basic-76ab687e-9f4f-11ea-b1d1-0242ac110018: Found 1 pods out of 1 May 26 12:50:28.116: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-76ab687e-9f4f-11ea-b1d1-0242ac110018" is running May 26 12:50:38.123: INFO: Pod "my-hostname-basic-76ab687e-9f4f-11ea-b1d1-0242ac110018-xxk8h" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 12:50:23 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 12:50:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-76ab687e-9f4f-11ea-b1d1-0242ac110018]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 12:50:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-76ab687e-9f4f-11ea-b1d1-0242ac110018]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 12:50:23 +0000 UTC Reason: Message:}]) May 26 12:50:38.123: INFO: Trying to dial the pod May 26 12:50:43.146: INFO: Controller my-hostname-basic-76ab687e-9f4f-11ea-b1d1-0242ac110018: Got expected result from replica 1 
[my-hostname-basic-76ab687e-9f4f-11ea-b1d1-0242ac110018-xxk8h]: "my-hostname-basic-76ab687e-9f4f-11ea-b1d1-0242ac110018-xxk8h", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:50:43.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-qqnlh" for this suite. May 26 12:50:49.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:50:49.225: INFO: namespace: e2e-tests-replicaset-qqnlh, resource: bindings, ignored listing per whitelist May 26 12:50:49.236: INFO: namespace e2e-tests-replicaset-qqnlh deletion completed in 6.087200836s • [SLOW TEST:26.544 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:50:49.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 26 12:50:49.356: INFO: Waiting up to 
5m0s for pod "pod-8658e298-9f4f-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-zjxgv" to be "success or failure" May 26 12:50:49.359: INFO: Pod "pod-8658e298-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.707783ms May 26 12:50:51.363: INFO: Pod "pod-8658e298-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007193874s May 26 12:50:53.366: INFO: Pod "pod-8658e298-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010731992s May 26 12:50:55.370: INFO: Pod "pod-8658e298-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013991799s May 26 12:50:57.373: INFO: Pod "pod-8658e298-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017463918s May 26 12:50:59.376: INFO: Pod "pod-8658e298-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020281432s May 26 12:51:01.379: INFO: Pod "pod-8658e298-9f4f-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 12.023720639s May 26 12:51:03.383: INFO: Pod "pod-8658e298-9f4f-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.026935498s STEP: Saw pod success May 26 12:51:03.383: INFO: Pod "pod-8658e298-9f4f-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:51:03.385: INFO: Trying to get logs from node hunter-worker2 pod pod-8658e298-9f4f-11ea-b1d1-0242ac110018 container test-container: STEP: delete the pod May 26 12:51:03.406: INFO: Waiting for pod pod-8658e298-9f4f-11ea-b1d1-0242ac110018 to disappear May 26 12:51:03.416: INFO: Pod pod-8658e298-9f4f-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:51:03.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zjxgv" for this suite. May 26 12:51:09.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:51:09.488: INFO: namespace: e2e-tests-emptydir-zjxgv, resource: bindings, ignored listing per whitelist May 26 12:51:09.503: INFO: namespace e2e-tests-emptydir-zjxgv deletion completed in 6.084937784s • [SLOW TEST:20.267 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:51:09.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 26 12:51:10.131: INFO: created pod pod-service-account-defaultsa May 26 12:51:10.131: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 26 12:51:10.160: INFO: created pod pod-service-account-mountsa May 26 12:51:10.160: INFO: pod pod-service-account-mountsa service account token volume mount: true May 26 12:51:10.174: INFO: created pod pod-service-account-nomountsa May 26 12:51:10.174: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 26 12:51:10.223: INFO: created pod pod-service-account-defaultsa-mountspec May 26 12:51:10.223: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 26 12:51:10.241: INFO: created pod pod-service-account-mountsa-mountspec May 26 12:51:10.241: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 26 12:51:10.286: INFO: created pod pod-service-account-nomountsa-mountspec May 26 12:51:10.286: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 26 12:51:10.315: INFO: created pod pod-service-account-defaultsa-nomountspec May 26 12:51:10.315: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 26 12:51:10.339: INFO: created pod pod-service-account-mountsa-nomountspec May 26 12:51:10.339: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 26 12:51:10.369: INFO: created pod pod-service-account-nomountsa-nomountspec May 26 12:51:10.369: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] 
[sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:51:10.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-6dxld" for this suite. May 26 12:51:34.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:51:34.578: INFO: namespace: e2e-tests-svcaccounts-6dxld, resource: bindings, ignored listing per whitelist May 26 12:51:34.581: INFO: namespace e2e-tests-svcaccounts-6dxld deletion completed in 24.154736496s • [SLOW TEST:25.078 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:51:34.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 26 12:51:34.674: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-zfnks" to be "success or failure" May 26 12:51:34.678: INFO: Pod "downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.537722ms May 26 12:51:36.680: INFO: Pod "downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006102352s May 26 12:51:38.683: INFO: Pod "downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008997529s May 26 12:51:40.686: INFO: Pod "downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012356368s May 26 12:51:42.690: INFO: Pod "downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01551852s May 26 12:51:44.693: INFO: Pod "downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019020866s May 26 12:51:46.696: INFO: Pod "downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.022333163s May 26 12:51:48.700: INFO: Pod "downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.025778814s May 26 12:51:50.703: INFO: Pod "downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.029125525s May 26 12:51:52.707: INFO: Pod "downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.032965283s May 26 12:51:54.711: INFO: Pod "downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.036511808s STEP: Saw pod success May 26 12:51:54.711: INFO: Pod "downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:51:54.713: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018 container client-container: STEP: delete the pod May 26 12:51:54.748: INFO: Waiting for pod downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018 to disappear May 26 12:51:54.760: INFO: Pod downwardapi-volume-a15af059-9f4f-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:51:54.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zfnks" for this suite. May 26 12:52:00.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:52:00.800: INFO: namespace: e2e-tests-projected-zfnks, resource: bindings, ignored listing per whitelist May 26 12:52:00.843: INFO: namespace e2e-tests-projected-zfnks deletion completed in 6.081040679s • [SLOW TEST:26.262 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:52:00.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0526 12:52:11.159579 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 12:52:11.159: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For 
namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:52:11.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-5kktz" for this suite. May 26 12:52:19.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:52:19.202: INFO: namespace: e2e-tests-gc-5kktz, resource: bindings, ignored listing per whitelist May 26 12:52:19.238: INFO: namespace e2e-tests-gc-5kktz deletion completed in 8.076088161s • [SLOW TEST:18.394 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:52:19.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0526 
12:52:29.397031 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 12:52:29.397: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:52:29.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mggwg" for this suite. 
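The two [sig-api-machinery] Garbage collector specs above verify cascading deletion driven by ownerReferences: a dependent is collected only once every one of its owners is gone, so pods given a second, surviving owner are kept. As an illustration only (a toy model, not the e2e framework's code; the object names are hypothetical), that rule can be sketched in a few lines:

```python
# Toy model of ownerReference-based garbage collection, for illustration.
# An object is collected only when ALL of its owners have been deleted.

def collect(objects, owners, deleted):
    """Return the set of dependents garbage-collected after `deleted` go away."""
    gone = set(deleted)
    changed = True
    while changed:
        changed = False
        for obj in objects:
            if obj in gone or not owners.get(obj):
                continue  # already gone, or has no owners at all
            if all(o in gone for o in owners[obj]):
                gone.add(obj)  # every owner deleted: cascade to this dependent
                changed = True
    return gone - set(deleted)

# As in the first spec: half the pods gain the surviving RC as a second owner.
owners = {
    "pod-a": ["rc-to-be-deleted"],                # sole owner deleted -> collected
    "pod-b": ["rc-to-be-deleted", "rc-to-stay"],  # one valid owner remains -> kept
}
print(collect(["pod-a", "pod-b"], owners, ["rc-to-be-deleted"]))  # {'pod-a'}
```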
May 26 12:52:35.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:52:35.440: INFO: namespace: e2e-tests-gc-mggwg, resource: bindings, ignored listing per whitelist May 26 12:52:35.505: INFO: namespace e2e-tests-gc-mggwg deletion completed in 6.104615652s • [SLOW TEST:16.267 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:52:35.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 26 12:52:35.601: INFO: Waiting up to 5m0s for pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-cpsb6" to be "success or failure" May 26 12:52:35.619: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.191896ms May 26 12:52:37.623: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02171076s May 26 12:52:39.626: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025346273s May 26 12:52:41.629: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028546544s May 26 12:52:43.633: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032342491s May 26 12:52:45.636: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.035285571s May 26 12:52:47.639: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.037740906s May 26 12:52:49.642: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.041455199s May 26 12:52:51.646: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.045104823s May 26 12:52:53.649: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.048616692s May 26 12:52:55.653: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.052025544s May 26 12:52:57.656: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.055212572s May 26 12:52:59.706: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 24.104768508s May 26 12:53:01.709: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.108431798s May 26 12:53:03.735: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.134316226s STEP: Saw pod success May 26 12:53:03.735: INFO: Pod "pod-c5adb112-9f4f-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:53:03.737: INFO: Trying to get logs from node hunter-worker pod pod-c5adb112-9f4f-11ea-b1d1-0242ac110018 container test-container: STEP: delete the pod May 26 12:53:03.813: INFO: Waiting for pod pod-c5adb112-9f4f-11ea-b1d1-0242ac110018 to disappear May 26 12:53:03.818: INFO: Pod pod-c5adb112-9f4f-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:53:03.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cpsb6" for this suite. May 26 12:53:09.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:53:09.925: INFO: namespace: e2e-tests-emptydir-cpsb6, resource: bindings, ignored listing per whitelist May 26 12:53:09.981: INFO: namespace e2e-tests-emptydir-cpsb6 deletion completed in 6.13893025s • [SLOW TEST:34.476 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:53:09.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 26 12:53:10.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gp22m' May 26 12:53:13.817: INFO: stderr: "" May 26 12:53:13.817: INFO: stdout: "pod/pause created\n" May 26 12:53:13.817: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 26 12:53:13.817: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-gp22m" to be "running and ready" May 26 12:53:13.822: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294257ms May 26 12:53:15.825: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007397833s May 26 12:53:17.827: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009616783s May 26 12:53:19.831: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013021324s May 26 12:53:21.833: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015562078s May 26 12:53:23.836: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018907476s May 26 12:53:25.839: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.021734257s May 26 12:53:27.842: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 14.024758233s May 26 12:53:27.842: INFO: Pod "pause" satisfied condition "running and ready" May 26 12:53:27.842: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 26 12:53:27.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-gp22m' May 26 12:53:27.948: INFO: stderr: "" May 26 12:53:27.948: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 26 12:53:27.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-gp22m' May 26 12:53:28.050: INFO: stderr: "" May 26 12:53:28.050: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 15s testing-label-value\n" STEP: removing the label testing-label of a pod May 26 12:53:28.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-gp22m' May 26 12:53:28.136: INFO: stderr: "" May 26 12:53:28.136: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 26 12:53:28.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-gp22m' May 26 12:53:28.229: INFO: stderr: "" May 26 12:53:28.229: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 15s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 26 12:53:28.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gp22m' May 26 12:53:28.323: INFO: stderr: "warning: Immediate 
deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 12:53:28.323: INFO: stdout: "pod \"pause\" force deleted\n" May 26 12:53:28.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-gp22m' May 26 12:53:28.424: INFO: stderr: "No resources found.\n" May 26 12:53:28.424: INFO: stdout: "" May 26 12:53:28.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-gp22m -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 26 12:53:28.515: INFO: stderr: "" May 26 12:53:28.515: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:53:28.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gp22m" for this suite. 
May 26 12:53:34.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:53:34.553: INFO: namespace: e2e-tests-kubectl-gp22m, resource: bindings, ignored listing per whitelist May 26 12:53:34.596: INFO: namespace e2e-tests-kubectl-gp22m deletion completed in 6.078134465s • [SLOW TEST:24.614 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:53:34.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 26 12:53:34.713: INFO: Creating deployment "test-recreate-deployment" May 26 12:53:34.716: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 26 12:53:34.752: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 26 12:53:36.759: INFO: Waiting 
deployment "test-recreate-deployment" to complete May 26 12:53:36.764: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:53:38.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 
26 12:53:40.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:53:42.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:53:44.767: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:53:46.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:53:48.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:53:50.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726094414, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 12:53:52.767: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 26 12:53:52.772: INFO: Updating deployment 
test-recreate-deployment May 26 12:53:52.772: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 26 12:53:53.122: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-qpcb6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qpcb6/deployments/test-recreate-deployment,UID:e8eaac6e-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12620974,Generation:2,CreationTimestamp:2020-05-26 12:53:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-26 12:53:53 +0000 UTC 2020-05-26 12:53:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-26 12:53:53 +0000 UTC 2020-05-26 12:53:34 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 26 12:53:53.142: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-qpcb6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qpcb6/replicasets/test-recreate-deployment-589c4bfd,UID:f3c9f593-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12620972,Generation:1,CreationTimestamp:2020-05-26 12:53:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e8eaac6e-9f4f-11ea-99e8-0242ac110002 0xc001f5daff 0xc001f5db10}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 26 12:53:53.142: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 26 12:53:53.142: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-qpcb6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qpcb6/replicasets/test-recreate-deployment-5bf7f65dc,UID:e8f076e6-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12620963,Generation:2,CreationTimestamp:2020-05-26 12:53:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e8eaac6e-9f4f-11ea-99e8-0242ac110002 0xc001f5dc60 0xc001f5dc61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 26 12:53:53.147: INFO: Pod "test-recreate-deployment-589c4bfd-w4n9x" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-w4n9x,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-qpcb6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qpcb6/pods/test-recreate-deployment-589c4bfd-w4n9x,UID:f3ccbb32-9f4f-11ea-99e8-0242ac110002,ResourceVersion:12620975,Generation:0,CreationTimestamp:2020-05-26 12:53:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd f3c9f593-9f4f-11ea-99e8-0242ac110002 0xc00205a94f 0xc00205a960}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vwfnv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vwfnv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vwfnv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00205a9d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00205a9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:53:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:53:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:53:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 12:53:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-26 12:53:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:53:53.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-qpcb6" for this suite. 
May 26 12:53:59.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:53:59.194: INFO: namespace: e2e-tests-deployment-qpcb6, resource: bindings, ignored listing per whitelist May 26 12:53:59.257: INFO: namespace e2e-tests-deployment-qpcb6 deletion completed in 6.107677836s • [SLOW TEST:24.661 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:53:59.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 26 12:53:59.353: INFO: Waiting up to 5m0s for pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-s5vxp" to be "success or failure" May 26 12:53:59.401: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 47.566532ms May 26 12:54:01.419: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.065728631s May 26 12:54:03.422: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069025153s May 26 12:54:05.507: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154329576s May 26 12:54:07.545: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.191856353s May 26 12:54:09.548: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.195122234s May 26 12:54:11.551: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.197960694s May 26 12:54:13.833: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.480382762s May 26 12:54:15.837: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.483871896s May 26 12:54:17.839: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 18.48620434s May 26 12:54:19.847: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 20.493881849s May 26 12:54:22.228: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 22.875201889s May 26 12:54:24.231: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.878195748s STEP: Saw pod success May 26 12:54:24.231: INFO: Pod "pod-f79980d4-9f4f-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:54:24.234: INFO: Trying to get logs from node hunter-worker2 pod pod-f79980d4-9f4f-11ea-b1d1-0242ac110018 container test-container: STEP: delete the pod May 26 12:54:24.872: INFO: Waiting for pod pod-f79980d4-9f4f-11ea-b1d1-0242ac110018 to disappear May 26 12:54:25.156: INFO: Pod pod-f79980d4-9f4f-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:54:25.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-s5vxp" for this suite. May 26 12:54:31.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:54:31.383: INFO: namespace: e2e-tests-emptydir-s5vxp, resource: bindings, ignored listing per whitelist May 26 12:54:31.439: INFO: namespace e2e-tests-emptydir-s5vxp deletion completed in 6.279546156s • [SLOW TEST:32.182 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:54:31.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop 
STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-8j77f STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-8j77f STEP: Deleting pre-stop pod May 26 12:55:12.568: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:55:12.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-8j77f" for this suite. 
May 26 12:55:52.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:55:53.016: INFO: namespace: e2e-tests-prestop-8j77f, resource: bindings, ignored listing per whitelist May 26 12:55:53.038: INFO: namespace e2e-tests-prestop-8j77f deletion completed in 40.428184454s • [SLOW TEST:81.598 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:55:53.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 26 12:55:53.188: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-6w6jt" to be "success or failure" May 26 12:55:53.241: INFO: Pod "downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 52.798211ms May 26 12:55:55.244: INFO: Pod "downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055986263s May 26 12:55:57.247: INFO: Pod "downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059300321s May 26 12:55:59.251: INFO: Pod "downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062730839s May 26 12:56:01.254: INFO: Pod "downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06627412s May 26 12:56:03.257: INFO: Pod "downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069133561s May 26 12:56:05.319: INFO: Pod "downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 12.131246024s May 26 12:56:07.322: INFO: Pod "downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.133968328s STEP: Saw pod success May 26 12:56:07.322: INFO: Pod "downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:56:07.324: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018 container client-container: STEP: delete the pod May 26 12:56:07.355: INFO: Waiting for pod downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018 to disappear May 26 12:56:07.390: INFO: Pod downwardapi-volume-3b71180f-9f50-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:56:07.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6w6jt" for this suite. 
May 26 12:56:13.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:56:13.465: INFO: namespace: e2e-tests-downward-api-6w6jt, resource: bindings, ignored listing per whitelist May 26 12:56:13.504: INFO: namespace e2e-tests-downward-api-6w6jt deletion completed in 6.108490326s • [SLOW TEST:20.466 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:56:13.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 26 12:56:13.590: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 
12:56:32.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-nqjtk" for this suite. May 26 12:56:38.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:56:38.534: INFO: namespace: e2e-tests-init-container-nqjtk, resource: bindings, ignored listing per whitelist May 26 12:56:38.561: INFO: namespace e2e-tests-init-container-nqjtk deletion completed in 6.286328209s • [SLOW TEST:25.057 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:56:38.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 26 12:56:38.692: INFO: Waiting up to 5m0s for pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-wc2sh" to be "success or failure" May 26 12:56:38.763: INFO: Pod 
"pod-5692a9ae-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 70.006496ms May 26 12:56:40.828: INFO: Pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135876459s May 26 12:56:42.833: INFO: Pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140046032s May 26 12:56:44.836: INFO: Pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143097035s May 26 12:56:46.838: INFO: Pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145757942s May 26 12:56:48.842: INFO: Pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.149008415s May 26 12:56:50.845: INFO: Pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.152581613s May 26 12:56:52.848: INFO: Pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.155913206s May 26 12:56:54.852: INFO: Pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.159656126s May 26 12:56:56.855: INFO: Pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.16291787s May 26 12:56:58.888: INFO: Pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.195771132s May 26 12:57:00.892: INFO: Pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.199009778s STEP: Saw pod success May 26 12:57:00.892: INFO: Pod "pod-5692a9ae-9f50-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 12:57:00.894: INFO: Trying to get logs from node hunter-worker2 pod pod-5692a9ae-9f50-11ea-b1d1-0242ac110018 container test-container: STEP: delete the pod May 26 12:57:00.912: INFO: Waiting for pod pod-5692a9ae-9f50-11ea-b1d1-0242ac110018 to disappear May 26 12:57:00.932: INFO: Pod pod-5692a9ae-9f50-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 12:57:00.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wc2sh" for this suite. May 26 12:57:06.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 12:57:06.991: INFO: namespace: e2e-tests-emptydir-wc2sh, resource: bindings, ignored listing per whitelist May 26 12:57:07.042: INFO: namespace e2e-tests-emptydir-wc2sh deletion completed in 6.107619952s • [SLOW TEST:28.481 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 12:57:07.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-vxlg7/configmap-test-6789ca99-9f50-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume configMaps May 26 12:57:07.189: INFO: Waiting up to 5m0s for pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018" in namespace "e2e-tests-configmap-vxlg7" to be "success or failure" May 26 12:57:07.206: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.713488ms May 26 12:57:09.209: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020197873s May 26 12:57:11.212: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023196268s May 26 12:57:13.216: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026868686s May 26 12:57:15.219: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029929089s May 26 12:57:17.225: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.036237627s May 26 12:57:19.227: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.038509999s May 26 12:57:21.231: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.041813336s May 26 12:57:23.234: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.044778116s May 26 12:57:25.237: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.048180393s May 26 12:57:27.240: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.050946729s May 26 12:57:29.243: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.054066708s May 26 12:57:31.247: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 24.057727933s May 26 12:57:33.250: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.060869081s May 26 12:57:35.253: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 28.064500387s May 26 12:57:37.278: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.089440144s May 26 12:57:39.281: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.092143555s May 26 12:57:41.284: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 34.094712826s May 26 12:57:43.296: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 36.107215924s May 26 12:57:45.300: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 38.111052381s
STEP: Saw pod success
May 26 12:57:45.300: INFO: Pod "pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 12:57:45.302: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018 container env-test:
STEP: delete the pod
May 26 12:57:45.345: INFO: Waiting for pod pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018 to disappear
May 26 12:57:45.350: INFO: Pod pod-configmaps-678be4d8-9f50-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:57:45.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vxlg7" for this suite.
May 26 12:57:51.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:57:51.367: INFO: namespace: e2e-tests-configmap-vxlg7, resource: bindings, ignored listing per whitelist
May 26 12:57:51.446: INFO: namespace e2e-tests-configmap-vxlg7 deletion completed in 6.094227761s
• [SLOW TEST:44.404 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:57:51.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-p7njv
I0526 12:57:51.576826 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-p7njv, replica count: 1
I0526 12:57:52.627172 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:57:53.627339 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:57:54.627546 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:57:55.627758 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:57:56.627925 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:57:57.628094 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:57:58.628267 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:57:59.628475 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:58:00.628687 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:58:01.628888 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:58:02.629089 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:58:03.629436 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:58:04.629559 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:58:05.629740 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:58:06.629926 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 12:58:07.630145 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 26 12:58:07.761: INFO: Created: latency-svc-fb2gc
May 26 12:58:07.779: INFO: Got endpoints: latency-svc-fb2gc [49.1936ms]
May 26 12:58:07.809: INFO: Created: latency-svc-5h4p8
May 26 12:58:07.859: INFO: Got endpoints: latency-svc-5h4p8 [79.997692ms]
May 26 12:58:07.875: INFO: Created: latency-svc-hwrd7
May 26 12:58:07.884: INFO: Got endpoints: latency-svc-hwrd7 [104.862579ms]
May 26 12:58:07.911: INFO: Created: latency-svc-qdstm
May 26 12:58:07.921: INFO: Got endpoints: latency-svc-qdstm [141.191464ms]
May 26 12:58:07.949: INFO: Created: latency-svc-4nk8t
May 26 12:58:07.997: INFO: Got endpoints: latency-svc-4nk8t [217.890159ms]
May 26 12:58:08.038: INFO: Created: latency-svc-hrc2s
May 26 12:58:08.089: INFO: Got endpoints: latency-svc-hrc2s [310.053229ms]
May 26 12:58:08.142: INFO: Created: latency-svc-vmfq7
May 26 12:58:08.149: INFO: Got endpoints: latency-svc-vmfq7 [370.322753ms]
May 26 12:58:08.218: INFO: Created: latency-svc-24hdf
May 26 12:58:08.234: INFO: Got endpoints: latency-svc-24hdf [454.45181ms]
May 26 12:58:08.289: INFO: Created: latency-svc-fdfmb
May 26 12:58:08.306: INFO: Got endpoints: latency-svc-fdfmb [526.770132ms]
May 26 12:58:08.337: INFO: Created: latency-svc-9pvxg
May 26 12:58:08.348: INFO: Got endpoints: latency-svc-9pvxg [568.929617ms]
May 26 12:58:08.372: INFO: Created: latency-svc-cb48b
May 26 12:58:08.572: INFO: Got endpoints: latency-svc-cb48b [793.024677ms]
May 26 12:58:08.968: INFO: Created: latency-svc-v5lp5
May 26 12:58:09.134: INFO: Got endpoints: latency-svc-v5lp5 [1.354587083s]
May 26 12:58:09.171: INFO: Created: latency-svc-xjccf
May 26 12:58:09.189: INFO: Got endpoints: latency-svc-xjccf [1.409665182s]
May 26 12:58:09.278: INFO: Created: latency-svc-bb9xh
May 26 12:58:09.303: INFO: Got endpoints: latency-svc-bb9xh [1.523678209s]
May 26 12:58:09.338: INFO: Created: latency-svc-6vffp
May 26 12:58:09.393: INFO: Got endpoints: latency-svc-6vffp [1.613107051s]
May 26 12:58:09.398: INFO: Created: latency-svc-vx4p9
May 26 12:58:09.419: INFO: Got endpoints: latency-svc-vx4p9 [1.63919346s]
May 26 12:58:09.860: INFO: Created: latency-svc-gzmpk
May 26 12:58:09.863: INFO: Got endpoints: latency-svc-gzmpk [2.003609811s]
May 26 12:58:09.946: INFO: Created: latency-svc-mj42d
May 26 12:58:09.997: INFO: Got endpoints: latency-svc-mj42d [2.112712343s]
May 26 12:58:10.012: INFO: Created: latency-svc-pk6pl
May 26 12:58:10.048: INFO: Got endpoints: latency-svc-pk6pl [2.12705494s]
May 26 12:58:10.071: INFO: Created: latency-svc-c7cnd
May 26 12:58:10.090: INFO: Got endpoints: latency-svc-c7cnd [2.092927419s]
May 26 12:58:10.135: INFO: Created: latency-svc-4slkt
May 26 12:58:10.144: INFO: Got endpoints: latency-svc-4slkt [2.054539296s]
May 26 12:58:10.167: INFO: Created: latency-svc-gzgwj
May 26 12:58:10.184: INFO: Got endpoints: latency-svc-gzgwj [2.034348893s]
May 26 12:58:10.215: INFO: Created: latency-svc-txqzr
May 26 12:58:10.229: INFO: Got endpoints: latency-svc-txqzr [1.995239456s]
May 26 12:58:10.285: INFO: Created: latency-svc-rmwg7
May 26 12:58:10.295: INFO: Got endpoints: latency-svc-rmwg7 [1.989099576s]
May 26 12:58:10.354: INFO: Created: latency-svc-4x7x2
May 26 12:58:10.434: INFO: Got endpoints: latency-svc-4x7x2 [2.085958228s]
May 26 12:58:24.399: INFO: Created: latency-svc-bfj2m
May 26 12:58:24.475: INFO: Created: latency-svc-67kr8
May 26 12:58:24.710: INFO: Created: latency-svc-ctnqc
May 26 12:58:24.711: INFO: Got endpoints: latency-svc-bfj2m [16.138175521s]
May 26 12:58:24.715: INFO: Got endpoints: latency-svc-ctnqc [15.526387409s]
May 26 12:58:24.796: INFO: Got endpoints: latency-svc-67kr8 [15.661517358s]
May 26 12:58:24.800: INFO: Created: latency-svc-6hw2t
May 26 12:58:24.803: INFO: Got endpoints: latency-svc-6hw2t [15.499849599s]
May 26 12:58:24.873: INFO: Created: latency-svc-7vcs2
May 26 12:58:24.879: INFO: Got endpoints: latency-svc-7vcs2 [15.486698589s]
May 26 12:58:24.914: INFO: Created: latency-svc-95p54
May 26 12:58:24.934: INFO: Got endpoints: latency-svc-95p54 [15.515256073s]
May 26 12:58:24.970: INFO: Created: latency-svc-p7m5x
May 26 12:58:25.009: INFO: Got endpoints: latency-svc-p7m5x [15.146303446s]
May 26 12:58:25.023: INFO: Created: latency-svc-qlfmp
May 26 12:58:25.032: INFO: Got endpoints: latency-svc-qlfmp [15.034770897s]
May 26 12:58:25.078: INFO: Created: latency-svc-v8kzq
May 26 12:58:25.085: INFO: Got endpoints: latency-svc-v8kzq [15.037791308s]
May 26 12:58:25.141: INFO: Created: latency-svc-ljt8d
May 26 12:58:25.146: INFO: Got endpoints: latency-svc-ljt8d [15.055442766s]
May 26 12:58:25.176: INFO: Created: latency-svc-hjcs2
May 26 12:58:25.182: INFO: Got endpoints: latency-svc-hjcs2 [15.03791977s]
May 26 12:58:25.217: INFO: Created: latency-svc-998ph
May 26 12:58:25.230: INFO: Got endpoints: latency-svc-998ph [15.046263712s]
May 26 12:58:25.333: INFO: Created: latency-svc-gvbn5
May 26 12:58:25.355: INFO: Got endpoints: latency-svc-gvbn5 [15.125888034s]
May 26 12:58:25.392: INFO: Created: latency-svc-g4mzx
May 26 12:58:25.411: INFO: Got endpoints: latency-svc-g4mzx [15.11594881s]
May 26 12:58:25.471: INFO: Created: latency-svc-7nzwh
May 26 12:58:25.474: INFO: Got endpoints: latency-svc-7nzwh [15.039882253s]
May 26 12:58:25.520: INFO: Created: latency-svc-5zhjd
May 26 12:58:25.544: INFO: Got endpoints: latency-svc-5zhjd [833.520109ms]
May 26 12:58:25.646: INFO: Created: latency-svc-5bklq
May 26 12:58:25.676: INFO: Got endpoints: latency-svc-5bklq [960.415258ms]
May 26 12:58:25.740: INFO: Created: latency-svc-nd9kr
May 26 12:58:25.802: INFO: Got endpoints: latency-svc-nd9kr [1.006729119s]
May 26 12:58:25.856: INFO: Created: latency-svc-jrkqc
May 26 12:58:25.881: INFO: Got endpoints: latency-svc-jrkqc [1.078018366s]
May 26 12:58:25.962: INFO: Created: latency-svc-jhfpd
May 26 12:58:25.965: INFO: Got endpoints: latency-svc-jhfpd [1.085064203s]
May 26 12:58:26.048: INFO: Created: latency-svc-d6wzp
May 26 12:58:26.108: INFO: Got endpoints: latency-svc-d6wzp [1.17375959s]
May 26 12:58:26.194: INFO: Created: latency-svc-z6j92
May 26 12:58:26.261: INFO: Got endpoints: latency-svc-z6j92 [1.251584703s]
May 26 12:58:26.311: INFO: Created: latency-svc-5x66w
May 26 12:58:26.338: INFO: Got endpoints: latency-svc-5x66w [1.305972825s]
May 26 12:58:26.411: INFO: Created: latency-svc-tk5r9
May 26 12:58:26.422: INFO: Got endpoints: latency-svc-tk5r9 [1.336465563s]
May 26 12:58:26.450: INFO: Created: latency-svc-bxvkz
May 26 12:58:26.458: INFO: Got endpoints: latency-svc-bxvkz [1.31249292s]
May 26 12:58:26.487: INFO: Created: latency-svc-vx2gd
May 26 12:58:26.495: INFO: Got endpoints: latency-svc-vx2gd [1.31261072s]
May 26 12:58:26.566: INFO: Created: latency-svc-5mz5c
May 26 12:58:26.573: INFO: Got endpoints: latency-svc-5mz5c [1.343049003s]
May 26 12:58:26.605: INFO: Created: latency-svc-4smhl
May 26 12:58:26.622: INFO: Got endpoints: latency-svc-4smhl [1.266857153s]
May 26 12:58:26.649: INFO: Created: latency-svc-d9bwt
May 26 12:58:26.664: INFO: Got endpoints: latency-svc-d9bwt [1.25281419s]
May 26 12:58:26.722: INFO: Created: latency-svc-t95mj
May 26 12:58:26.730: INFO: Got endpoints: latency-svc-t95mj [1.256324251s]
May 26 12:58:26.763: INFO: Created: latency-svc-jrrzp
May 26 12:58:26.779: INFO: Got endpoints: latency-svc-jrrzp [1.234969081s]
May 26 12:58:26.884: INFO: Created: latency-svc-kkgf8
May 26 12:58:26.887: INFO: Got endpoints: latency-svc-kkgf8 [1.210537398s]
May 26 12:58:26.931: INFO: Created: latency-svc-n4nfn
May 26 12:58:26.948: INFO: Got endpoints: latency-svc-n4nfn [1.145379494s]
May 26 12:58:27.030: INFO: Created: latency-svc-qdmts
May 26 12:58:27.038: INFO: Got endpoints: latency-svc-qdmts [1.156920757s]
May 26 12:58:27.178: INFO: Created: latency-svc-2wqtm
May 26 12:58:27.183: INFO: Got endpoints: latency-svc-2wqtm [1.218201702s]
May 26 12:58:27.219: INFO: Created: latency-svc-lgc2w
May 26 12:58:27.236: INFO: Got endpoints: latency-svc-lgc2w [1.128450559s]
May 26 12:58:27.339: INFO: Created: latency-svc-b8qw5
May 26 12:58:27.347: INFO: Got endpoints: latency-svc-b8qw5 [1.085629492s]
May 26 12:58:27.401: INFO: Created: latency-svc-cndjg
May 26 12:58:27.438: INFO: Got endpoints: latency-svc-cndjg [1.100298253s]
May 26 12:58:27.508: INFO: Created: latency-svc-t7c8c
May 26 12:58:27.531: INFO: Got endpoints: latency-svc-t7c8c [1.109299026s]
May 26 12:58:27.563: INFO: Created: latency-svc-5qld6
May 26 12:58:27.716: INFO: Got endpoints: latency-svc-5qld6 [1.257837813s]
May 26 12:58:27.772: INFO: Created: latency-svc-58qfh
May 26 12:58:27.796: INFO: Got endpoints: latency-svc-58qfh [1.30139427s]
May 26 12:58:27.905: INFO: Created: latency-svc-b7kfd
May 26 12:58:27.922: INFO: Got endpoints: latency-svc-b7kfd [1.349005008s]
May 26 12:58:27.953: INFO: Created: latency-svc-j66pc
May 26 12:58:27.971: INFO: Got endpoints: latency-svc-j66pc [1.348729512s]
May 26 12:58:28.033: INFO: Created: latency-svc-xdlkz
May 26 12:58:28.079: INFO: Got endpoints: latency-svc-xdlkz [1.41465882s]
May 26 12:58:28.079: INFO: Created: latency-svc-xxnsf
May 26 12:58:28.121: INFO: Got endpoints: latency-svc-xxnsf [1.390102533s]
May 26 12:58:28.189: INFO: Created: latency-svc-clpdd
May 26 12:58:28.193: INFO: Got endpoints: latency-svc-clpdd [1.414361783s]
May 26 12:58:28.222: INFO: Created: latency-svc-cwd4w
May 26 12:58:28.236: INFO: Got endpoints: latency-svc-cwd4w [1.349211502s]
May 26 12:58:28.276: INFO: Created: latency-svc-ns8dg
May 26 12:58:28.326: INFO: Got endpoints: latency-svc-ns8dg [1.378555639s]
May 26 12:58:28.343: INFO: Created: latency-svc-ns9sm
May 26 12:58:28.363: INFO: Got endpoints: latency-svc-ns9sm [1.324613256s]
May 26 12:58:28.420: INFO: Created: latency-svc-hp7gt
May 26 12:58:28.476: INFO: Got endpoints: latency-svc-hp7gt [1.293470187s]
May 26 12:58:28.498: INFO: Created: latency-svc-zrpff
May 26 12:58:28.514: INFO: Got endpoints: latency-svc-zrpff [1.277290621s]
May 26 12:58:28.533: INFO: Created: latency-svc-8wmc8
May 26 12:58:28.550: INFO: Got endpoints: latency-svc-8wmc8 [1.203829276s]
May 26 12:58:28.569: INFO: Created: latency-svc-fl7rz
May 26 12:58:28.626: INFO: Got endpoints: latency-svc-fl7rz [1.187928694s]
May 26 12:58:28.655: INFO: Created: latency-svc-9w8f9
May 26 12:58:28.666: INFO: Got endpoints: latency-svc-9w8f9 [1.134968345s]
May 26 12:58:28.698: INFO: Created: latency-svc-gfmkh
May 26 12:58:28.713: INFO: Got endpoints: latency-svc-gfmkh [997.374605ms]
May 26 12:58:28.776: INFO: Created: latency-svc-fxfk9
May 26 12:58:28.778: INFO: Got endpoints: latency-svc-fxfk9 [982.39609ms]
May 26 12:58:28.821: INFO: Created: latency-svc-lrwhn
May 26 12:58:28.840: INFO: Got endpoints: latency-svc-lrwhn [917.528129ms]
May 26 12:58:28.872: INFO: Created: latency-svc-29l6j
May 26 12:58:28.938: INFO: Got endpoints: latency-svc-29l6j [966.82387ms]
May 26 12:58:28.992: INFO: Created: latency-svc-xw7c4
May 26 12:58:29.034: INFO: Got endpoints: latency-svc-xw7c4 [955.019276ms]
May 26 12:58:29.101: INFO: Created: latency-svc-zpnvv
May 26 12:58:29.129: INFO: Got endpoints: latency-svc-zpnvv [1.008408678s]
May 26 12:58:29.159: INFO: Created: latency-svc-9ghdb
May 26 12:58:29.177: INFO: Got endpoints: latency-svc-9ghdb [983.706087ms]
May 26 12:58:29.258: INFO: Created: latency-svc-twdm4
May 26 12:58:29.258: INFO: Got endpoints: latency-svc-twdm4 [1.021860857s]
May 26 12:58:29.340: INFO: Created: latency-svc-xlxtd
May 26 12:58:29.441: INFO: Got endpoints: latency-svc-xlxtd [1.114228806s]
May 26 12:58:29.496: INFO: Created: latency-svc-4bd8b
May 26 12:58:29.533: INFO: Got endpoints: latency-svc-4bd8b [1.170013529s]
May 26 12:58:29.626: INFO: Created: latency-svc-hgg5w
May 26 12:58:29.639: INFO: Got endpoints: latency-svc-hgg5w [1.1627829s]
May 26 12:58:29.684: INFO: Created: latency-svc-nx5vv
May 26 12:58:29.723: INFO: Got endpoints: latency-svc-nx5vv [1.209697821s]
May 26 12:58:29.809: INFO: Created: latency-svc-88mzw
May 26 12:58:29.820: INFO: Got endpoints: latency-svc-88mzw [1.269348852s]
May 26 12:58:29.859: INFO: Created: latency-svc-wvm8p
May 26 12:58:29.875: INFO: Got endpoints: latency-svc-wvm8p [1.248345216s]
May 26 12:58:29.938: INFO: Created: latency-svc-6gbdf
May 26 12:58:29.940: INFO: Got endpoints: latency-svc-6gbdf [1.273826359s]
May 26 12:58:30.115: INFO: Created: latency-svc-5f4pt
May 26 12:58:30.115: INFO: Got endpoints: latency-svc-5f4pt [1.401897621s]
May 26 12:58:30.166: INFO: Created: latency-svc-smgd7
May 26 12:58:30.202: INFO: Got endpoints: latency-svc-smgd7 [1.423326555s]
May 26 12:58:30.267: INFO: Created: latency-svc-ngscg
May 26 12:58:30.277: INFO: Got endpoints: latency-svc-ngscg [1.437500061s]
May 26 12:58:30.316: INFO: Created: latency-svc-gj7zz
May 26 12:58:30.332: INFO: Got endpoints: latency-svc-gj7zz [1.394000896s]
May 26 12:58:30.453: INFO: Created: latency-svc-6hcb6
May 26 12:58:30.455: INFO: Got endpoints: latency-svc-6hcb6 [1.421339652s]
May 26 12:58:30.489: INFO: Created: latency-svc-kvrq9
May 26 12:58:30.507: INFO: Got endpoints: latency-svc-kvrq9 [1.377496976s]
May 26 12:58:30.531: INFO: Created: latency-svc-8fbrz
May 26 12:58:30.543: INFO: Got endpoints: latency-svc-8fbrz [1.365400445s]
May 26 12:58:30.591: INFO: Created: latency-svc-wq5sc
May 26 12:58:30.611: INFO: Got endpoints: latency-svc-wq5sc [1.353709055s]
May 26 12:58:30.635: INFO: Created: latency-svc-fch5m
May 26 12:58:30.651: INFO: Got endpoints: latency-svc-fch5m [1.210614769s]
May 26 12:58:30.677: INFO: Created: latency-svc-dnqpw
May 26 12:58:30.728: INFO: Got endpoints: latency-svc-dnqpw [1.195050692s]
May 26 12:58:30.759: INFO: Created: latency-svc-gfk7c
May 26 12:58:30.782: INFO: Got endpoints: latency-svc-gfk7c [1.143297453s]
May 26 12:58:30.813: INFO: Created: latency-svc-wfxk7
May 26 12:58:30.827: INFO: Got endpoints: latency-svc-wfxk7 [1.103633676s]
May 26 12:58:30.896: INFO: Created: latency-svc-26tz7
May 26 12:58:30.905: INFO: Got endpoints: latency-svc-26tz7 [1.08515153s]
May 26 12:58:30.934: INFO: Created: latency-svc-x4wtx
May 26 12:58:30.971: INFO: Got endpoints: latency-svc-x4wtx [1.096780624s]
May 26 12:58:31.052: INFO: Created: latency-svc-vr8tq
May 26 12:58:31.065: INFO: Got endpoints: latency-svc-vr8tq [1.124502296s]
May 26 12:58:31.102: INFO: Created: latency-svc-428ng
May 26 12:58:31.116: INFO: Got endpoints: latency-svc-428ng [1.00099952s]
May 26 12:58:31.151: INFO: Created: latency-svc-nx6cw
May 26 12:58:31.201: INFO: Got endpoints: latency-svc-nx6cw [998.949236ms]
May 26 12:58:31.234: INFO: Created: latency-svc-zzc4s
May 26 12:58:31.237: INFO: Got endpoints: latency-svc-zzc4s [959.365701ms]
May 26 12:58:31.282: INFO: Created: latency-svc-twgft
May 26 12:58:31.345: INFO: Got endpoints: latency-svc-twgft [1.013276895s]
May 26 12:58:31.359: INFO: Created: latency-svc-5m4cv
May 26 12:58:31.370: INFO: Got endpoints: latency-svc-5m4cv [914.234866ms]
May 26 12:58:31.402: INFO: Created: latency-svc-n87vt
May 26 12:58:31.418: INFO: Got endpoints: latency-svc-n87vt [911.162029ms]
May 26 12:58:31.438: INFO: Created: latency-svc-lw9wd
May 26 12:58:31.488: INFO: Got endpoints: latency-svc-lw9wd [945.657607ms]
May 26 12:58:31.504: INFO: Created: latency-svc-hfqpk
May 26 12:58:31.564: INFO: Got endpoints: latency-svc-hfqpk [952.974931ms]
May 26 12:58:31.629: INFO: Created: latency-svc-cz7r6
May 26 12:58:31.648: INFO: Got endpoints: latency-svc-cz7r6 [996.092183ms]
May 26 12:58:31.678: INFO: Created: latency-svc-nqvvj
May 26 12:58:31.696: INFO: Got endpoints: latency-svc-nqvvj [967.914848ms]
May 26 12:58:31.775: INFO: Created: latency-svc-8wg6f
May 26 12:58:31.792: INFO: Got endpoints: latency-svc-8wg6f [1.009081809s]
May 26 12:58:31.843: INFO: Created: latency-svc-jspdz
May 26 12:58:31.955: INFO: Got endpoints: latency-svc-jspdz [1.128123854s]
May 26 12:58:31.957: INFO: Created: latency-svc-9vrsb
May 26 12:58:31.966: INFO: Got endpoints: latency-svc-9vrsb [1.061226807s]
May 26 12:58:32.006: INFO: Created: latency-svc-bdggz
May 26 12:58:32.015: INFO: Got endpoints: latency-svc-bdggz [1.043309476s]
May 26 12:58:32.105: INFO: Created: latency-svc-b6x8v
May 26 12:58:32.132: INFO: Got endpoints: latency-svc-b6x8v [1.06728676s]
May 26 12:58:32.174: INFO: Created: latency-svc-hsqdx
May 26 12:58:32.189: INFO: Got endpoints: latency-svc-hsqdx [1.073002267s]
May 26 12:58:32.255: INFO: Created: latency-svc-p5294
May 26 12:58:32.258: INFO: Got endpoints: latency-svc-p5294 [1.056942809s]
May 26 12:58:32.293: INFO: Created: latency-svc-jnf9c
May 26 12:58:32.310: INFO: Got endpoints: latency-svc-jnf9c [1.073089321s]
May 26 12:58:32.335: INFO: Created: latency-svc-kvlw6
May 26 12:58:32.346: INFO: Got endpoints: latency-svc-kvlw6 [1.001219237s]
May 26 12:58:32.408: INFO: Created: latency-svc-xjgnz
May 26 12:58:32.425: INFO: Got endpoints: latency-svc-xjgnz [1.055173192s]
May 26 12:58:32.450: INFO: Created: latency-svc-5bt7p
May 26 12:58:32.474: INFO: Got endpoints: latency-svc-5bt7p [1.05557849s]
May 26 12:58:32.497: INFO: Created: latency-svc-f5n7z
May 26 12:58:32.578: INFO: Got endpoints: latency-svc-f5n7z [1.089527029s]
May 26 12:58:32.580: INFO: Created: latency-svc-lkv29
May 26 12:58:32.588: INFO: Got endpoints: latency-svc-lkv29 [1.023457521s]
May 26 12:58:32.655: INFO: Created: latency-svc-wng6m
May 26 12:58:32.722: INFO: Got endpoints: latency-svc-wng6m [1.074120988s]
May 26 12:58:32.755: INFO: Created: latency-svc-kc79m
May 26 12:58:32.787: INFO: Got endpoints: latency-svc-kc79m [1.09084992s]
May 26 12:58:32.810: INFO: Created: latency-svc-kpwmz
May 26 12:58:32.866: INFO: Got endpoints: latency-svc-kpwmz [1.073946634s]
May 26 12:58:32.876: INFO: Created: latency-svc-sbm5h
May 26 12:58:32.890: INFO: Got endpoints: latency-svc-sbm5h [934.505562ms]
May 26 12:58:32.930: INFO: Created: latency-svc-fh9lw
May 26 12:58:32.962: INFO: Got endpoints: latency-svc-fh9lw [995.437426ms]
May 26 12:58:33.039: INFO: Created: latency-svc-gn2xj
May 26 12:58:33.047: INFO: Got endpoints: latency-svc-gn2xj [1.031925128s]
May 26 12:58:33.069: INFO: Created: latency-svc-84r7t
May 26 12:58:33.089: INFO: Got endpoints: latency-svc-84r7t [956.471332ms]
May 26 12:58:33.122: INFO: Created: latency-svc-ccgwl
May 26 12:58:33.137: INFO: Got endpoints: latency-svc-ccgwl [947.286163ms]
May 26 12:58:33.191: INFO: Created: latency-svc-llqsc
May 26 12:58:33.197: INFO: Got endpoints: latency-svc-llqsc [939.076044ms]
May 26 12:58:33.231: INFO: Created: latency-svc-2zdd2
May 26 12:58:33.246: INFO: Got endpoints: latency-svc-2zdd2 [935.771482ms]
May 26 12:58:33.272: INFO: Created: latency-svc-m5c2l
May 26 12:58:33.288: INFO: Got endpoints: latency-svc-m5c2l [941.799571ms]
May 26 12:58:33.351: INFO: Created: latency-svc-zl7f6
May 26 12:58:33.380: INFO: Got endpoints: latency-svc-zl7f6 [955.651996ms]
May 26 12:58:33.435: INFO: Created: latency-svc-lzgkn
May 26 12:58:33.494: INFO: Got endpoints: latency-svc-lzgkn [1.020849495s]
May 26 12:58:33.524: INFO: Created: latency-svc-v2gwb
May 26 12:58:33.547: INFO: Got endpoints: latency-svc-v2gwb [969.085871ms]
May 26 12:58:33.573: INFO: Created: latency-svc-fvfgb
May 26 12:58:33.680: INFO: Got endpoints: latency-svc-fvfgb [1.091841154s]
May 26 12:58:33.687: INFO: Created: latency-svc-gqp6c
May 26 12:58:33.706: INFO: Got endpoints: latency-svc-gqp6c [983.724787ms]
May 26 12:58:33.763: INFO: Created: latency-svc-6tc7p
May 26 12:58:33.831: INFO: Got endpoints: latency-svc-6tc7p [1.044493211s]
May 26 12:58:33.867: INFO: Created: latency-svc-vb55c
May 26 12:58:33.899: INFO: Got endpoints: latency-svc-vb55c [1.032861562s]
May 26 12:58:33.968: INFO: Created: latency-svc-2trsd
May 26 12:58:34.023: INFO: Got endpoints: latency-svc-2trsd [1.132900485s]
May 26 12:58:34.113: INFO: Created: latency-svc-mxsn6
May 26 12:58:34.144: INFO: Got endpoints: latency-svc-mxsn6 [1.182498817s]
May 26 12:58:34.179: INFO: Created: latency-svc-8qkdv
May 26 12:58:34.267: INFO: Got endpoints: latency-svc-8qkdv [1.220024847s]
May 26 12:58:34.268: INFO: Created: latency-svc-jfb7f
May 26 12:58:34.277: INFO: Got endpoints: latency-svc-jfb7f [1.188060895s]
May 26 12:58:34.359: INFO: Created: latency-svc-lmvxz
May 26 12:58:34.417: INFO: Got endpoints: latency-svc-lmvxz [1.280563557s]
May 26 12:58:34.474: INFO: Created: latency-svc-f48t7
May 26 12:58:34.512: INFO: Got endpoints: latency-svc-f48t7 [1.314606631s]
May 26 12:58:34.567: INFO: Created: latency-svc-p9kf2
May 26 12:58:34.572: INFO: Got endpoints: latency-svc-p9kf2 [1.325754499s]
May 26 12:58:34.598: INFO: Created: latency-svc-qqsbj
May 26 12:58:34.614: INFO: Got endpoints: latency-svc-qqsbj [1.326232153s]
May 26 12:58:34.659: INFO: Created: latency-svc-vhdw8
May 26 12:58:34.705: INFO: Got endpoints: latency-svc-vhdw8 [1.324642084s]
May 26 12:58:34.732: INFO: Created: latency-svc-cwl5m
May 26 12:58:34.753: INFO: Got endpoints: latency-svc-cwl5m [1.258621602s]
May 26 12:58:34.790: INFO: Created: latency-svc-zsgwj
May 26 12:58:34.856: INFO: Created: latency-svc-vbkfr
May 26 12:58:34.898: INFO: Created: latency-svc-p5z59
May 26 12:58:34.898: INFO: Got endpoints: latency-svc-zsgwj [1.350840835s]
May 26 12:58:34.916: INFO: Got endpoints: latency-svc-p5z59 [1.210871105s]
May 26 12:58:35.004: INFO: Got endpoints: latency-svc-vbkfr [1.32372805s]
May 26 12:58:35.004: INFO: Created: latency-svc-zl2xk
May 26 12:58:35.351: INFO: Got endpoints: latency-svc-zl2xk [1.519264225s]
May 26 12:58:35.360: INFO: Created: latency-svc-czwmg
May 26 12:58:35.373: INFO: Got endpoints: latency-svc-czwmg [1.474020367s]
May 26 12:58:35.397: INFO: Created: latency-svc-slqst
May 26 12:58:35.402: INFO: Got endpoints: latency-svc-slqst [1.379647537s]
May 26 12:58:35.432: INFO: Created: latency-svc-n2shv
May 26 12:58:35.501: INFO: Got endpoints: latency-svc-n2shv [1.356547197s]
May 26 12:58:35.535: INFO: Created: latency-svc-ls2n7
May 26 12:58:35.554: INFO: Got endpoints: latency-svc-ls2n7 [1.286795604s]
May 26 12:58:35.595: INFO: Created: latency-svc-rs9tk
May 26 12:58:35.764: INFO: Got endpoints: latency-svc-rs9tk [1.487594918s]
May 26 12:58:35.791: INFO: Created: latency-svc-xhltk
May 26 12:58:35.824: INFO: Got endpoints: latency-svc-xhltk [1.40697627s]
May 26 12:58:37.130: INFO: Created: latency-svc-pz9lm
May 26 12:58:37.134: INFO: Got endpoints: latency-svc-pz9lm [2.622486051s]
May 26 12:58:37.206: INFO: Created: latency-svc-sbvld
May 26 12:58:37.351: INFO: Got endpoints: latency-svc-sbvld [2.779250733s]
May 26 12:58:37.362: INFO: Created: latency-svc-pkrgf
May 26 12:58:37.413: INFO: Got endpoints: latency-svc-pkrgf [2.798468644s]
May 26 12:58:37.574: INFO: Created: latency-svc-j9fnd
May 26 12:58:37.592: INFO: Got endpoints: latency-svc-j9fnd [2.887009741s]
May 26 12:58:37.664: INFO: Created: latency-svc-5ngk4
May 26 12:58:37.689: INFO: Got endpoints: latency-svc-5ngk4 [2.935851112s]
May 26 12:58:37.716: INFO: Created: latency-svc-xg2kg
May 26 12:58:37.731: INFO: Got endpoints: latency-svc-xg2kg [2.833134372s]
May 26 12:58:37.812: INFO: Created: latency-svc-zwjph
May 26 12:58:37.821: INFO: Got endpoints: latency-svc-zwjph [2.904850974s]
May 26 12:58:37.851: INFO: Created: latency-svc-qpx6k
May 26 12:58:37.864: INFO: Got endpoints: latency-svc-qpx6k [2.860279081s]
May 26 12:58:37.891: INFO: Created: latency-svc-xsrwf
May 26 12:58:37.907: INFO: Got endpoints: latency-svc-xsrwf [2.556449762s]
May 26 12:58:37.987: INFO: Created: latency-svc-btktp
May 26 12:58:38.022: INFO: Got endpoints: latency-svc-btktp [2.648795713s]
May 26 12:58:38.112: INFO: Created: latency-svc-hfzfn
May 26 12:58:38.117: INFO: Got endpoints: latency-svc-hfzfn [2.714659604s]
May 26 12:58:38.173: INFO: Created: latency-svc-whwk6
May 26 12:58:38.202: INFO: Got endpoints: latency-svc-whwk6 [2.700815075s]
May 26 12:58:38.261: INFO: Created: latency-svc-kl2wc
May 26 12:58:38.264: INFO: Got endpoints: latency-svc-kl2wc [2.710636712s]
May 26 12:58:38.294: INFO: Created: latency-svc-xml5j
May 26 12:58:38.305: INFO: Got endpoints: latency-svc-xml5j [2.540105973s]
May 26 12:58:38.334: INFO: Created: latency-svc-sblzb
May 26 12:58:38.353: INFO: Got endpoints: latency-svc-sblzb [2.528041196s]
May 26 12:58:38.429: INFO: Created: latency-svc-jggb9
May 26 12:58:38.437: INFO: Got endpoints: latency-svc-jggb9 [1.302975452s]
May 26 12:58:38.461: INFO: Created: latency-svc-8hjlh
May 26 12:58:38.491: INFO: Got endpoints: latency-svc-8hjlh [1.14030622s]
May 26 12:58:38.528: INFO: Created: latency-svc-b46js
May 26 12:58:38.578: INFO: Got endpoints: latency-svc-b46js [1.16521101s]
May 26 12:58:38.592: INFO: Created: latency-svc-rn7q7
May 26 12:58:38.606: INFO: Got endpoints: latency-svc-rn7q7 [1.013922337s]
May 26 12:58:38.634: INFO: Created: latency-svc-lbc6k
May 26 12:58:38.648: INFO: Got endpoints: latency-svc-lbc6k [959.395822ms]
May 26 12:58:38.670: INFO: Created: latency-svc-j8sbk
May 26 12:58:38.716: INFO: Got endpoints: latency-svc-j8sbk [984.472953ms]
May 26 12:58:38.749: INFO: Created: latency-svc-lns5j
May 26 12:58:38.763: INFO: Got endpoints: latency-svc-lns5j [941.844414ms]
May 26 12:58:38.798: INFO: Created: latency-svc-5ws7q
May 26 12:58:38.860: INFO: Got endpoints: latency-svc-5ws7q [995.988284ms]
May 26 12:58:38.898: INFO: Created: latency-svc-xt5fg
May 26 12:58:38.914: INFO: Got endpoints: latency-svc-xt5fg [1.007055264s]
May 26 12:58:38.946: INFO: Created: latency-svc-bkbx6
May 26 12:58:39.003: INFO: Got endpoints: latency-svc-bkbx6 [981.615389ms]
May 26 12:58:39.021: INFO: Created: latency-svc-nwqsn
May 26 12:58:39.043: INFO: Got endpoints: latency-svc-nwqsn [925.962099ms]
May 26 12:58:39.096: INFO: Created: latency-svc-7wqhp
May 26 12:58:39.162: INFO: Got endpoints: latency-svc-7wqhp [960.277312ms]
May 26 12:58:39.199: INFO: Created: latency-svc-tsbhb
May 26 12:58:39.210: INFO: Got endpoints: latency-svc-tsbhb [945.309525ms]
May 26 12:58:39.259: INFO: Created: latency-svc-6jg96
May 26 12:58:39.345: INFO: Got endpoints: latency-svc-6jg96 [1.039794136s]
May 26 12:58:39.347: INFO: Created: latency-svc-569bx
May 26 12:58:39.396: INFO: Got endpoints: latency-svc-569bx [1.043444425s]
May 26 12:58:39.495: INFO: Created: latency-svc-qnzqg
May 26 12:58:39.505: INFO: Got endpoints: latency-svc-qnzqg [1.068227859s]
May 26 12:58:39.506: INFO: Latencies: [79.997692ms 104.862579ms 141.191464ms 217.890159ms 310.053229ms 370.322753ms 454.45181ms 526.770132ms 568.929617ms 793.024677ms 833.520109ms 911.162029ms 914.234866ms 917.528129ms 925.962099ms 934.505562ms 935.771482ms 939.076044ms 941.799571ms 941.844414ms 945.309525ms 945.657607ms 947.286163ms 952.974931ms 955.019276ms 955.651996ms 956.471332ms 959.365701ms 959.395822ms 960.277312ms 960.415258ms 966.82387ms 967.914848ms 969.085871ms 981.615389ms 982.39609ms 983.706087ms 983.724787ms 984.472953ms 995.437426ms 995.988284ms 996.092183ms 997.374605ms 998.949236ms 1.00099952s 1.001219237s 1.006729119s 1.007055264s 1.008408678s 1.009081809s 1.013276895s 1.013922337s 1.020849495s 1.021860857s 1.023457521s 1.031925128s 1.032861562s 1.039794136s 1.043309476s 1.043444425s 1.044493211s 1.055173192s 1.05557849s 1.056942809s 1.061226807s 1.06728676s 1.068227859s 1.073002267s 1.073089321s 1.073946634s 1.074120988s 1.078018366s 1.085064203s 1.08515153s 1.085629492s 1.089527029s 1.09084992s 1.091841154s 1.096780624s 1.100298253s 1.103633676s 1.109299026s 1.114228806s 1.124502296s 1.128123854s 1.128450559s 1.132900485s 1.134968345s 1.14030622s 1.143297453s 1.145379494s 1.156920757s 1.1627829s 1.16521101s 1.170013529s 1.17375959s 1.182498817s 1.187928694s 1.188060895s 1.195050692s 1.203829276s 1.209697821s 1.210537398s 1.210614769s 1.210871105s 1.218201702s 1.220024847s 1.234969081s 1.248345216s 1.251584703s 1.25281419s 1.256324251s 1.257837813s 1.258621602s 1.266857153s 1.269348852s 1.273826359s 1.277290621s 1.280563557s 1.286795604s 1.293470187s 1.30139427s 1.302975452s 1.305972825s 1.31249292s 1.31261072s 1.314606631s 1.32372805s 1.324613256s 1.324642084s 1.325754499s 1.326232153s 1.336465563s 1.343049003s 1.348729512s 1.349005008s 1.349211502s 1.350840835s 1.353709055s 1.354587083s 1.356547197s 1.365400445s 1.377496976s 1.378555639s 1.379647537s 1.390102533s 1.394000896s 1.401897621s 1.40697627s 1.409665182s 1.414361783s 1.41465882s 1.421339652s 1.423326555s 1.437500061s 1.474020367s 1.487594918s 1.519264225s 1.523678209s 1.613107051s 1.63919346s 1.989099576s 1.995239456s 2.003609811s 2.034348893s 2.054539296s 2.085958228s 2.092927419s 2.112712343s 2.12705494s 2.528041196s 2.540105973s 2.556449762s 2.622486051s 2.648795713s 2.700815075s 2.710636712s 2.714659604s 2.779250733s 2.798468644s 2.833134372s 2.860279081s 2.887009741s 2.904850974s 2.935851112s 15.034770897s 15.037791308s 15.03791977s 15.039882253s 15.046263712s 15.055442766s 15.11594881s 15.125888034s 15.146303446s 15.486698589s 15.499849599s 15.515256073s 15.526387409s 15.661517358s 16.138175521s]
May 26 12:58:39.506: INFO: 50 %ile: 1.203829276s
May 26 12:58:39.506: INFO: 90 %ile: 2.833134372s
May 26 12:58:39.506: INFO: 99 %ile: 15.661517358s
May 26 12:58:39.506: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:58:39.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-p7njv" for this suite.
May 26 12:59:19.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:59:19.596: INFO: namespace: e2e-tests-svc-latency-p7njv, resource: bindings, ignored listing per whitelist
May 26 12:59:19.632: INFO: namespace e2e-tests-svc-latency-p7njv deletion completed in 40.118095987s
• [SLOW TEST:88.186 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:59:19.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b68e9bb6-9f50-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume secrets
May 26 12:59:19.747: INFO: Waiting up to 5m0s for pod "pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018" in namespace "e2e-tests-secrets-msbct" to be "success or failure"
May 26 12:59:19.764: INFO: Pod "pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.894393ms
May 26 12:59:21.767: INFO: Pod "pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01986595s
May 26 12:59:23.769: INFO: Pod "pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022334104s
May 26 12:59:25.773: INFO: Pod "pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025455218s
May 26 12:59:27.776: INFO: Pod "pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.028842117s
May 26 12:59:29.795: INFO: Pod "pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.04770557s
May 26 12:59:31.798: INFO: Pod "pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.051342556s
May 26 12:59:33.802: INFO: Pod "pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 14.054704839s
May 26 12:59:35.805: INFO: Pod "pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.058357039s
STEP: Saw pod success
May 26 12:59:35.805: INFO: Pod "pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 12:59:35.808: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018 container secret-volume-test:
STEP: delete the pod
May 26 12:59:35.849: INFO: Waiting for pod pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018 to disappear
May 26 12:59:35.850: INFO: Pod pod-secrets-b68f48d8-9f50-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 12:59:35.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-msbct" for this suite.
May 26 12:59:41.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 12:59:41.932: INFO: namespace: e2e-tests-secrets-msbct, resource: bindings, ignored listing per whitelist
May 26 12:59:41.957: INFO: namespace e2e-tests-secrets-msbct deletion completed in 6.104737777s
• [SLOW TEST:22.325 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 12:59:41.957: INFO: >>> kubeConfig: /root/.kube/config
STEP:
Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-c3f6953f-9f50-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume configMaps May 26 12:59:42.245: INFO: Waiting up to 5m0s for pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018" in namespace "e2e-tests-configmap-6lkhh" to be "success or failure" May 26 12:59:42.278: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 33.294104ms May 26 12:59:44.281: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036402761s May 26 12:59:46.285: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039576496s May 26 12:59:48.289: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043584401s May 26 12:59:50.292: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047002827s May 26 12:59:52.295: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.050456908s May 26 12:59:54.299: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.053951332s May 26 12:59:56.303: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.057618928s May 26 12:59:58.305: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.060334281s May 26 13:00:00.309: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.063593998s May 26 13:00:02.312: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.067191664s May 26 13:00:04.315: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.070282785s May 26 13:00:06.319: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 24.073860863s May 26 13:00:08.323: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.077767801s May 26 13:00:10.326: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 28.08144437s May 26 13:00:12.330: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.085124286s May 26 13:00:14.333: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 32.088168812s STEP: Saw pod success May 26 13:00:14.333: INFO: Pod "pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 13:00:14.336: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018 container configmap-volume-test: STEP: delete the pod May 26 13:00:14.507: INFO: Waiting for pod pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018 to disappear May 26 13:00:14.530: INFO: Pod pod-configmaps-c3fa3887-9f50-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:00:14.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6lkhh" for this suite. May 26 13:00:20.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:00:20.567: INFO: namespace: e2e-tests-configmap-6lkhh, resource: bindings, ignored listing per whitelist May 26 13:00:20.615: INFO: namespace e2e-tests-configmap-6lkhh deletion completed in 6.081887752s • [SLOW TEST:38.657 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:00:20.615: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 26 13:00:20.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 26 13:00:20.807: INFO: stderr: "" May 26 13:00:20.807: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 26 13:00:20.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vmdg2' May 26 13:00:21.067: INFO: stderr: "" May 26 13:00:21.067: INFO: stdout: "replicationcontroller/redis-master created\n" May 26 13:00:21.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vmdg2' May 26 13:00:21.315: INFO: stderr: "" May 26 13:00:21.315: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
May 26 13:00:22.320: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:22.320: INFO: Found 0 / 1 May 26 13:00:23.323: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:23.323: INFO: Found 0 / 1 May 26 13:00:24.342: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:24.342: INFO: Found 0 / 1 May 26 13:00:25.319: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:25.319: INFO: Found 0 / 1 May 26 13:00:26.320: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:26.320: INFO: Found 0 / 1 May 26 13:00:27.319: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:27.319: INFO: Found 0 / 1 May 26 13:00:28.319: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:28.319: INFO: Found 0 / 1 May 26 13:00:29.346: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:29.346: INFO: Found 0 / 1 May 26 13:00:30.318: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:30.318: INFO: Found 0 / 1 May 26 13:00:31.319: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:31.319: INFO: Found 0 / 1 May 26 13:00:32.319: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:32.319: INFO: Found 0 / 1 May 26 13:00:33.319: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:33.319: INFO: Found 0 / 1 May 26 13:00:34.341: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:34.341: INFO: Found 0 / 1 May 26 13:00:35.323: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:35.323: INFO: Found 0 / 1 May 26 13:00:36.319: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:36.319: INFO: Found 0 / 1 May 26 13:00:37.533: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:37.533: INFO: Found 1 / 1 May 26 13:00:37.533: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 26 13:00:37.536: INFO: Selector matched 1 pods for map[app:redis] May 26 13:00:37.536: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 26 13:00:37.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-7mn4x --namespace=e2e-tests-kubectl-vmdg2' May 26 13:00:37.646: INFO: stderr: "" May 26 13:00:37.646: INFO: stdout: "Name: redis-master-7mn4x\nNamespace: e2e-tests-kubectl-vmdg2\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Tue, 26 May 2020 13:00:21 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.247\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://ca549990af0408bc1f707ecfcdf50ae392ee3346f1b30842bc4d6f26b4d7a4d2\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 26 May 2020 13:00:36 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-pz7s4 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-pz7s4:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-pz7s4\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 16s default-scheduler Successfully assigned e2e-tests-kubectl-vmdg2/redis-master-7mn4x to hunter-worker\n Normal Pulled 6s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" May 26 13:00:37.646: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-vmdg2' May 26 13:00:37.776: INFO: stderr: "" May 26 13:00:37.776: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-vmdg2\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 16s replication-controller Created pod: redis-master-7mn4x\n" May 26 13:00:37.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-vmdg2' May 26 13:00:37.878: INFO: stderr: "" May 26 13:00:37.878: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-vmdg2\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.110.33.149\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.247:6379\nSession Affinity: None\nEvents: \n" May 26 13:00:37.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 26 13:00:38.002: INFO: stderr: "" May 26 13:00:38.002: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n 
Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 26 May 2020 13:00:28 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 26 May 2020 13:00:28 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 26 May 2020 13:00:28 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 26 May 2020 13:00:28 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 71d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 71d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 71d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 71d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 71d\n 
kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 71d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 71d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 26 13:00:38.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-vmdg2' May 26 13:00:38.092: INFO: stderr: "" May 26 13:00:38.092: INFO: stdout: "Name: e2e-tests-kubectl-vmdg2\nLabels: e2e-framework=kubectl\n e2e-run=362666b9-9f3e-11ea-b1d1-0242ac110018\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:00:38.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vmdg2" for this suite. 
May 26 13:01:00.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:01:00.244: INFO: namespace: e2e-tests-kubectl-vmdg2, resource: bindings, ignored listing per whitelist May 26 13:01:00.257: INFO: namespace e2e-tests-kubectl-vmdg2 deletion completed in 22.161782728s • [SLOW TEST:39.642 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:01:00.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-f2c39eb9-9f50-11ea-b1d1-0242ac110018 May 26 13:01:00.767: INFO: Pod name my-hostname-basic-f2c39eb9-9f50-11ea-b1d1-0242ac110018: Found 0 pods out of 1 May 26 13:01:05.770: INFO: Pod name my-hostname-basic-f2c39eb9-9f50-11ea-b1d1-0242ac110018: Found 1 pods out of 1 May 26 13:01:05.770: INFO: Ensuring all 
pods for ReplicationController "my-hostname-basic-f2c39eb9-9f50-11ea-b1d1-0242ac110018" are running May 26 13:01:15.776: INFO: Pod "my-hostname-basic-f2c39eb9-9f50-11ea-b1d1-0242ac110018-27hxg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 13:01:00 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 13:01:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f2c39eb9-9f50-11ea-b1d1-0242ac110018]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 13:01:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f2c39eb9-9f50-11ea-b1d1-0242ac110018]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 13:01:00 +0000 UTC Reason: Message:}]) May 26 13:01:15.776: INFO: Trying to dial the pod May 26 13:01:20.793: INFO: Controller my-hostname-basic-f2c39eb9-9f50-11ea-b1d1-0242ac110018: Got expected result from replica 1 [my-hostname-basic-f2c39eb9-9f50-11ea-b1d1-0242ac110018-27hxg]: "my-hostname-basic-f2c39eb9-9f50-11ea-b1d1-0242ac110018-27hxg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:01:20.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-j9spj" for this suite. 
May 26 13:01:26.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:01:26.880: INFO: namespace: e2e-tests-replication-controller-j9spj, resource: bindings, ignored listing per whitelist May 26 13:01:26.901: INFO: namespace e2e-tests-replication-controller-j9spj deletion completed in 6.104326971s • [SLOW TEST:26.643 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:01:26.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 26 13:01:27.240: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 12.827888ms) May 26 13:01:27.243: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.847567ms) May 26 13:01:27.246: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.705989ms) May 26 13:01:27.248: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.123451ms) May 26 13:01:27.250: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.423588ms) May 26 13:01:27.253: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.353109ms) May 26 13:01:27.255: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.090432ms) May 26 13:01:27.258: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.644617ms) May 26 13:01:27.260: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.679065ms) May 26 13:01:27.263: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.953648ms) May 26 13:01:27.266: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.837554ms) May 26 13:01:27.295: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 28.462968ms) May 26 13:01:27.298: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.48533ms) May 26 13:01:27.301: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.398325ms) May 26 13:01:27.305: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.035319ms) May 26 13:01:27.308: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.196144ms) May 26 13:01:27.311: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.504127ms) May 26 13:01:27.315: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.355401ms) May 26 13:01:27.318: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.341318ms) May 26 13:01:27.321: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.347077ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:01:27.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-sjx6c" for this suite. May 26 13:01:33.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:01:33.419: INFO: namespace: e2e-tests-proxy-sjx6c, resource: bindings, ignored listing per whitelist May 26 13:01:33.429: INFO: namespace e2e-tests-proxy-sjx6c deletion completed in 6.105363116s • [SLOW TEST:6.528 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:01:33.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-0655e637-9f51-11ea-b1d1-0242ac110018 STEP: Creating a pod to 
test consume configMaps
May 26 13:01:33.672: INFO: Waiting up to 5m0s for pod "pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018" in namespace "e2e-tests-configmap-944fj" to be "success or failure"
May 26 13:01:33.682: INFO: Pod "pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.710359ms
May 26 13:01:35.684: INFO: Pod "pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012154575s
May 26 13:01:37.687: INFO: Pod "pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015006762s
May 26 13:01:39.690: INFO: Pod "pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018152252s
May 26 13:01:41.696: INFO: Pod "pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02381152s
May 26 13:01:43.714: INFO: Pod "pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.042094518s
May 26 13:01:45.717: INFO: Pod "pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.045217849s
May 26 13:01:47.881: INFO: Pod "pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.208940975s
STEP: Saw pod success
May 26 13:01:47.881: INFO: Pod "pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:01:47.884: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018 container configmap-volume-test:
STEP: delete the pod
May 26 13:01:47.930: INFO: Waiting for pod pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018 to disappear
May 26 13:01:48.012: INFO: Pod pod-configmaps-0662021e-9f51-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:01:48.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-944fj" for this suite.
May 26 13:01:54.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:01:54.056: INFO: namespace: e2e-tests-configmap-944fj, resource: bindings, ignored listing per whitelist
May 26 13:01:54.086: INFO: namespace e2e-tests-configmap-944fj deletion completed in 6.070810206s
• [SLOW TEST:20.657 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:01:54.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
May 26 13:01:54.201: INFO: Waiting up to 5m0s for pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018" in namespace "e2e-tests-containers-mls58" to be "success or failure"
May 26 13:01:54.205: INFO: Pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111726ms
May 26 13:01:56.209: INFO: Pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007678227s
May 26 13:01:58.212: INFO: Pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010970921s
May 26 13:02:00.215: INFO: Pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014060918s
May 26 13:02:02.219: INFO: Pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017368547s
May 26 13:02:04.222: INFO: Pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020966941s
May 26 13:02:06.226: INFO: Pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024644705s
May 26 13:02:08.229: INFO: Pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.02800424s
May 26 13:02:10.232: INFO: Pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.031105093s
May 26 13:02:12.236: INFO: Pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 18.0344s
May 26 13:02:14.239: INFO: Pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.037563125s
STEP: Saw pod success
May 26 13:02:14.239: INFO: Pod "client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:02:14.241: INFO: Trying to get logs from node hunter-worker pod client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018 container test-container:
STEP: delete the pod
May 26 13:02:14.356: INFO: Waiting for pod client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018 to disappear
May 26 13:02:14.371: INFO: Pod client-containers-129d5f55-9f51-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:02:14.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-mls58" for this suite.
May 26 13:02:20.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:02:20.531: INFO: namespace: e2e-tests-containers-mls58, resource: bindings, ignored listing per whitelist
May 26 13:02:20.572: INFO: namespace e2e-tests-containers-mls58 deletion completed in 6.196870056s
• [SLOW TEST:26.485 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:02:20.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
May 26 13:02:20.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zj8gs'
May 26 13:02:20.911: INFO: stderr: ""
May 26 13:02:20.911: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 26 13:02:21.914: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:21.914: INFO: Found 0 / 1
May 26 13:02:22.915: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:22.915: INFO: Found 0 / 1
May 26 13:02:23.923: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:23.923: INFO: Found 0 / 1
May 26 13:02:24.916: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:24.916: INFO: Found 0 / 1
May 26 13:02:25.915: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:25.915: INFO: Found 0 / 1
May 26 13:02:26.915: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:26.915: INFO: Found 0 / 1
May 26 13:02:27.915: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:27.915: INFO: Found 0 / 1
May 26 13:02:28.914: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:28.914: INFO: Found 0 / 1
May 26 13:02:29.930: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:29.930: INFO: Found 0 / 1
May 26 13:02:30.915: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:30.915: INFO: Found 0 / 1
May 26 13:02:31.914: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:31.914: INFO: Found 0 / 1
May 26 13:02:32.914: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:32.914: INFO: Found 0 / 1
May 26 13:02:33.931: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:33.931: INFO: Found 0 / 1
May 26 13:02:34.915: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:34.915: INFO: Found 1 / 1
May 26 13:02:34.915: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
May 26 13:02:34.918: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:34.918: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 26 13:02:34.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-jh5n4 --namespace=e2e-tests-kubectl-zj8gs -p {"metadata":{"annotations":{"x":"y"}}}'
May 26 13:02:35.008: INFO: stderr: ""
May 26 13:02:35.008: INFO: stdout: "pod/redis-master-jh5n4 patched\n"
STEP: checking annotations
May 26 13:02:35.012: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:02:35.012: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:02:35.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zj8gs" for this suite.
May 26 13:02:57.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:02:57.032: INFO: namespace: e2e-tests-kubectl-zj8gs, resource: bindings, ignored listing per whitelist
May 26 13:02:57.084: INFO: namespace e2e-tests-kubectl-zj8gs deletion completed in 22.069954895s
• [SLOW TEST:36.512 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:02:57.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-38282b98-9f51-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume secrets
May 26 13:02:57.206: INFO: Waiting up to 5m0s for pod "pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018" in namespace "e2e-tests-secrets-7g4qr" to be "success or failure"
May 26 13:02:57.219: INFO: Pod "pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.441543ms
May 26 13:02:59.222: INFO: Pod "pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015251527s
May 26 13:03:01.224: INFO: Pod "pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017962754s
May 26 13:03:03.228: INFO: Pod "pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021236433s
May 26 13:03:05.231: INFO: Pod "pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024568171s
May 26 13:03:07.234: INFO: Pod "pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027584912s
May 26 13:03:09.238: INFO: Pod "pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.031485389s
May 26 13:03:11.284: INFO: Pod "pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.077250749s
STEP: Saw pod success
May 26 13:03:11.284: INFO: Pod "pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:03:11.310: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018 container secret-volume-test:
STEP: delete the pod
May 26 13:03:11.357: INFO: Waiting for pod pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018 to disappear
May 26 13:03:11.372: INFO: Pod pod-secrets-3828d62a-9f51-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:03:11.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7g4qr" for this suite.
May 26 13:03:17.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:03:17.528: INFO: namespace: e2e-tests-secrets-7g4qr, resource: bindings, ignored listing per whitelist
May 26 13:03:17.571: INFO: namespace e2e-tests-secrets-7g4qr deletion completed in 6.196712018s
• [SLOW TEST:20.487 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:03:17.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-djh7q
May 26 13:03:32.070: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-djh7q
STEP: checking the pod's current state and verifying that restartCount is present
May 26 13:03:32.072: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:07:33.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-djh7q" for this suite.
May 26 13:07:39.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:07:39.990: INFO: namespace: e2e-tests-container-probe-djh7q, resource: bindings, ignored listing per whitelist
May 26 13:07:40.011: INFO: namespace e2e-tests-container-probe-djh7q deletion completed in 6.065029275s
• [SLOW TEST:262.440 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:07:40.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 26 13:07:40.150: INFO: Waiting up to 5m0s for pod "pod-e0d3c220-9f51-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-fl96w" to be "success or failure"
May 26 13:07:40.178: INFO: Pod "pod-e0d3c220-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 28.367014ms
May 26 13:07:42.182: INFO: Pod "pod-e0d3c220-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032029991s
May 26 13:07:44.185: INFO: Pod "pod-e0d3c220-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035431433s
May 26 13:07:46.189: INFO: Pod "pod-e0d3c220-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039202667s
May 26 13:07:48.389: INFO: Pod "pod-e0d3c220-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.239267956s
May 26 13:07:50.392: INFO: Pod "pod-e0d3c220-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.242690783s
May 26 13:07:52.395: INFO: Pod "pod-e0d3c220-9f51-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 12.245263776s
May 26 13:07:54.557: INFO: Pod "pod-e0d3c220-9f51-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.407204692s
STEP: Saw pod success
May 26 13:07:54.557: INFO: Pod "pod-e0d3c220-9f51-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:07:54.560: INFO: Trying to get logs from node hunter-worker2 pod pod-e0d3c220-9f51-11ea-b1d1-0242ac110018 container test-container:
STEP: delete the pod
May 26 13:07:54.582: INFO: Waiting for pod pod-e0d3c220-9f51-11ea-b1d1-0242ac110018 to disappear
May 26 13:07:54.585: INFO: Pod pod-e0d3c220-9f51-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:07:54.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fl96w" for this suite.
May 26 13:08:01.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:08:01.323: INFO: namespace: e2e-tests-emptydir-fl96w, resource: bindings, ignored listing per whitelist
May 26 13:08:01.331: INFO: namespace e2e-tests-emptydir-fl96w deletion completed in 6.743451673s
• [SLOW TEST:21.320 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:08:01.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-edaf505d-9f51-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume secrets
May 26 13:08:01.888: INFO: Waiting up to 5m0s for pod "pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018" in namespace "e2e-tests-secrets-4jv9l" to be "success or failure"
May 26 13:08:01.965: INFO: Pod "pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 77.100605ms
May 26 13:08:03.976: INFO: Pod "pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087702332s
May 26 13:08:05.978: INFO: Pod "pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090304044s
May 26 13:08:08.001: INFO: Pod "pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113154429s
May 26 13:08:10.004: INFO: Pod "pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116308166s
May 26 13:08:12.007: INFO: Pod "pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119127091s
May 26 13:08:14.030: INFO: Pod "pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.142139715s
May 26 13:08:16.042: INFO: Pod "pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 14.153969529s
May 26 13:08:18.045: INFO: Pod "pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.157383709s
STEP: Saw pod success
May 26 13:08:18.045: INFO: Pod "pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:08:18.047: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018 container secret-volume-test:
STEP: delete the pod
May 26 13:08:18.157: INFO: Waiting for pod pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018 to disappear
May 26 13:08:18.173: INFO: Pod pod-secrets-edbb3dd0-9f51-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:08:18.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4jv9l" for this suite.
May 26 13:08:24.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:08:24.261: INFO: namespace: e2e-tests-secrets-4jv9l, resource: bindings, ignored listing per whitelist
May 26 13:08:24.261: INFO: namespace e2e-tests-secrets-4jv9l deletion completed in 6.085660314s
• [SLOW TEST:22.930 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:08:24.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-clp87
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 26 13:08:24.355: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 26 13:09:12.536: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.169:8080/dial?request=hostName&protocol=http&host=10.244.1.251&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-clp87 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 26 13:09:12.536: INFO: >>> kubeConfig: /root/.kube/config
I0526 13:09:12.569649 6 log.go:172] (0xc000857c30) (0xc000603220) Create stream
I0526 13:09:12.569677 6 log.go:172] (0xc000857c30) (0xc000603220) Stream added, broadcasting: 1
I0526 13:09:12.571455 6 log.go:172] (0xc000857c30) Reply frame received for 1
I0526 13:09:12.571487 6 log.go:172] (0xc000857c30) (0xc00034bd60) Create stream
I0526 13:09:12.571497 6 log.go:172] (0xc000857c30) (0xc00034bd60) Stream added, broadcasting: 3
I0526 13:09:12.572243 6 log.go:172] (0xc000857c30) Reply frame received for 3
I0526 13:09:12.572268 6 log.go:172] (0xc000857c30) (0xc0006032c0) Create stream
I0526 13:09:12.572276 6 log.go:172] (0xc000857c30) (0xc0006032c0) Stream added, broadcasting: 5
I0526 13:09:12.573400 6 log.go:172] (0xc000857c30) Reply frame received for 5
I0526 13:09:13.012930 6 log.go:172] (0xc000857c30) Data frame received for 5
I0526 13:09:13.012976 6 log.go:172] (0xc0006032c0) (5) Data frame handling
I0526 13:09:13.013007 6 log.go:172] (0xc000857c30) Data frame received for 3
I0526 13:09:13.013026 6 log.go:172] (0xc00034bd60) (3) Data frame handling
I0526 13:09:13.013055 6 log.go:172] (0xc00034bd60) (3) Data frame sent
I0526 13:09:13.013445 6 log.go:172] (0xc000857c30) Data frame received for 3
I0526 13:09:13.013460 6 log.go:172] (0xc00034bd60) (3) Data frame handling
I0526 13:09:13.014142 6 log.go:172] (0xc000857c30) Data frame received for 1
I0526 13:09:13.014164 6 log.go:172] (0xc000603220) (1) Data frame handling
I0526 13:09:13.014180 6 log.go:172] (0xc000603220) (1) Data frame sent
I0526 13:09:13.014200 6 log.go:172] (0xc000857c30) (0xc000603220) Stream removed, broadcasting: 1
I0526 13:09:13.014234 6 log.go:172] (0xc000857c30) Go away received
I0526 13:09:13.014289 6 log.go:172] (0xc000857c30) (0xc000603220) Stream removed, broadcasting: 1
I0526 13:09:13.014325 6 log.go:172] (0xc000857c30) (0xc00034bd60) Stream removed, broadcasting: 3
I0526 13:09:13.014346 6 log.go:172] (0xc000857c30) (0xc0006032c0) Stream removed, broadcasting: 5
May 26 13:09:13.014: INFO: Waiting for endpoints: map[]
May 26 13:09:13.017: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.169:8080/dial?request=hostName&protocol=http&host=10.244.2.168&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-clp87 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 26 13:09:13.017: INFO: >>> kubeConfig: /root/.kube/config
I0526 13:09:13.045895 6 log.go:172] (0xc000587810) (0xc001194820) Create stream
I0526 13:09:13.045920 6 log.go:172] (0xc000587810) (0xc001194820) Stream added, broadcasting: 1
I0526 13:09:13.047619 6 log.go:172] (0xc000587810) Reply frame received for 1
I0526 13:09:13.047669 6 log.go:172] (0xc000587810) (0xc001720000) Create stream
I0526 13:09:13.047688 6 log.go:172] (0xc000587810) (0xc001720000) Stream added, broadcasting: 3
I0526 13:09:13.048462 6 log.go:172] (0xc000587810) Reply frame received for 3
I0526 13:09:13.048487 6 log.go:172] (0xc000587810) (0xc0011948c0) Create stream
I0526 13:09:13.048497 6 log.go:172] (0xc000587810) (0xc0011948c0) Stream added, broadcasting: 5
I0526 13:09:13.049103 6 log.go:172] (0xc000587810) Reply frame received for 5
I0526 13:09:13.473594 6 log.go:172] (0xc000587810) Data frame received for 5
I0526 13:09:13.473629 6 log.go:172] (0xc0011948c0) (5) Data frame handling
I0526 13:09:13.473668 6 log.go:172] (0xc000587810) Data frame received for 3
I0526 13:09:13.473693 6 log.go:172] (0xc001720000) (3) Data frame handling
I0526 13:09:13.473720 6 log.go:172] (0xc001720000) (3) Data frame sent
I0526 13:09:13.473739 6 log.go:172] (0xc000587810) Data frame received for 3
I0526 13:09:13.473755 6 log.go:172] (0xc001720000) (3) Data frame handling
I0526 13:09:13.474896 6 log.go:172] (0xc000587810) Data frame received for 1
I0526 13:09:13.474928 6 log.go:172] (0xc001194820) (1) Data frame handling
I0526 13:09:13.474941 6 log.go:172] (0xc001194820) (1) Data frame sent
I0526 13:09:13.474954 6 log.go:172] (0xc000587810) (0xc001194820) Stream removed, broadcasting: 1
I0526 13:09:13.474969 6 log.go:172] (0xc000587810) Go away received
I0526 13:09:13.475103 6 log.go:172] (0xc000587810) (0xc001194820) Stream removed, broadcasting: 1
I0526 13:09:13.475122 6 log.go:172] (0xc000587810) (0xc001720000) Stream removed, broadcasting: 3
I0526 13:09:13.475133 6 log.go:172] (0xc000587810) (0xc0011948c0) Stream removed, broadcasting: 5
May 26 13:09:13.475: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:09:13.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-clp87" for this suite.
May 26 13:09:37.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:09:37.544: INFO: namespace: e2e-tests-pod-network-test-clp87, resource: bindings, ignored listing per whitelist
May 26 13:09:37.557: INFO: namespace e2e-tests-pod-network-test-clp87 deletion completed in 24.079436485s
• [SLOW TEST:73.296 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:09:37.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
May 26 13:09:49.846: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-26f5a823-9f52-11ea-b1d1-0242ac110018", GenerateName:"", Namespace:"e2e-tests-pods-tnd5t", SelfLink:"/api/v1/namespaces/e2e-tests-pods-tnd5t/pods/pod-submit-remove-26f5a823-9f52-11ea-b1d1-0242ac110018", UID:"26f8b5d7-9f52-11ea-99e8-0242ac110002", ResourceVersion:"12624797", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726095377, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"798972191"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ndhjv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002019c80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ndhjv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002024ea8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001884060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002024ef0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002024f10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002024f18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002024f1c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726095377, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726095389, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726095389, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726095377, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.252", StartTime:(*v1.Time)(0xc001b3bfe0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001b24020), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine",
ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://1275b7f1b9fcdd187e444fbab478b2ef98edcb6bfe701ade100611b01b40cbe3"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:10:01.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-tnd5t" for this suite.
May 26 13:10:07.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:10:07.318: INFO: namespace: e2e-tests-pods-tnd5t, resource: bindings, ignored listing per whitelist
May 26 13:10:07.367: INFO: namespace e2e-tests-pods-tnd5t deletion completed in 6.074368326s
• [SLOW TEST:29.810 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:10:07.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-z457b
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z457b to expose endpoints map[]
May 26 13:10:07.518: INFO: Get endpoints failed (18.532378ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 26 13:10:08.522: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z457b exposes endpoints map[] (1.023117283s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-z457b
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z457b to expose endpoints map[pod1:[100]]
May 26 13:10:12.612: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.085708145s elapsed, will retry)
May 26 13:10:17.896: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.370155311s elapsed, will retry)
May 26 13:10:20.935: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z457b exposes endpoints map[pod1:[100]] (12.408361924s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-z457b
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z457b to expose endpoints map[pod2:[101] pod1:[100]]
May 26 13:10:25.076: INFO: Unexpected endpoints: found map[3946143a-9f52-11ea-99e8-0242ac110002:[100]], expected map[pod1:[100] pod2:[101]] (4.138079472s elapsed, will retry)
May 26 13:10:30.259: INFO: Unexpected endpoints: found map[3946143a-9f52-11ea-99e8-0242ac110002:[100]], expected map[pod1:[100] pod2:[101]] (9.321293927s elapsed, will retry)
May 26 13:10:34.291: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z457b exposes endpoints map[pod1:[100] pod2:[101]] (13.353422199s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-z457b
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z457b to expose endpoints map[pod2:[101]]
May 26 13:10:35.483: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z457b exposes endpoints map[pod2:[101]] (1.187380064s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-z457b
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z457b to expose endpoints map[]
May 26 13:10:36.781: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z457b exposes endpoints map[] (1.295286375s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:10:37.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-z457b" for this suite.
May 26 13:10:59.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:10:59.362: INFO: namespace: e2e-tests-services-z457b, resource: bindings, ignored listing per whitelist
May 26 13:10:59.366: INFO: namespace e2e-tests-services-z457b deletion completed in 22.229186381s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:51.999 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:10:59.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 26 13:10:59.651: INFO: Pod name pod-release: Found 0 pods out of 1
May 26 13:11:04.654: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:11:05.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-crm8f" for this suite.
May 26 13:11:11.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:11:11.787: INFO: namespace: e2e-tests-replication-controller-crm8f, resource: bindings, ignored listing per whitelist
May 26 13:11:11.813: INFO: namespace e2e-tests-replication-controller-crm8f deletion completed in 6.118690305s
• [SLOW TEST:12.446 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:11:11.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 26 13:11:11.933: INFO: Waiting up to 5m0s for pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-vqs5j" to be "success or failure"
May 26 13:11:11.945: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false.
Elapsed: 12.046649ms
May 26 13:11:13.948: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015392326s
May 26 13:11:15.958: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024955656s
May 26 13:11:17.961: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028121793s
May 26 13:11:19.964: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.031125289s
May 26 13:11:21.968: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.034640364s
May 26 13:11:23.971: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.037777924s
May 26 13:11:25.974: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.04118898s
May 26 13:11:27.978: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.044562315s
May 26 13:11:29.980: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.047266959s
May 26 13:11:31.984: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.050931371s
May 26 13:11:33.987: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.053692174s
STEP: Saw pod success
May 26 13:11:33.987: INFO: Pod "pod-5f0955c5-9f52-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:11:33.988: INFO: Trying to get logs from node hunter-worker pod pod-5f0955c5-9f52-11ea-b1d1-0242ac110018 container test-container:
STEP: delete the pod
May 26 13:11:34.008: INFO: Waiting for pod pod-5f0955c5-9f52-11ea-b1d1-0242ac110018 to disappear
May 26 13:11:34.053: INFO: Pod pod-5f0955c5-9f52-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:11:34.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vqs5j" for this suite.
May 26 13:11:40.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:11:40.108: INFO: namespace: e2e-tests-emptydir-vqs5j, resource: bindings, ignored listing per whitelist
May 26 13:11:40.144: INFO: namespace e2e-tests-emptydir-vqs5j deletion completed in 6.088034003s
• [SLOW TEST:28.331 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:11:40.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object,
basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 26 13:11:40.274: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
May 26 13:11:40.279: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xctf4/daemonsets","resourceVersion":"12625143"},"items":null}
May 26 13:11:40.280: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xctf4/pods","resourceVersion":"12625143"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:11:40.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-xctf4" for this suite.
May 26 13:11:46.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:11:46.377: INFO: namespace: e2e-tests-daemonsets-xctf4, resource: bindings, ignored listing per whitelist
May 26 13:11:46.386: INFO: namespace e2e-tests-daemonsets-xctf4 deletion completed in 6.095951113s
S [SKIPPING] [6.242 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should rollback without unnecessary restarts [Conformance] [It]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 26 13:11:40.274: Requires at least 2 nodes (not -1)
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:11:46.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 26 13:11:46.788: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-2xltm,SelfLink:/api/v1/namespaces/e2e-tests-watch-2xltm/configmaps/e2e-watch-test-resource-version,UID:73cf0c12-9f52-11ea-99e8-0242ac110002,ResourceVersion:12625168,Generation:0,CreationTimestamp:2020-05-26 13:11:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 26 13:11:46.789: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-2xltm,SelfLink:/api/v1/namespaces/e2e-tests-watch-2xltm/configmaps/e2e-watch-test-resource-version,UID:73cf0c12-9f52-11ea-99e8-0242ac110002,ResourceVersion:12625169,Generation:0,CreationTimestamp:2020-05-26 13:11:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:11:46.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-2xltm" for this suite.
May 26 13:11:52.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:11:52.852: INFO: namespace: e2e-tests-watch-2xltm, resource: bindings, ignored listing per whitelist
May 26 13:11:52.873: INFO: namespace e2e-tests-watch-2xltm deletion completed in 6.076768521s
• [SLOW TEST:6.487 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:11:52.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-6sdck
May 26 13:12:11.022: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-6sdck
STEP: checking the pod's current state and verifying that restartCount is present
May 26 13:12:11.024: INFO: Initial restart count of pod liveness-exec is 0
May 26 13:13:03.147: INFO: Restart count of pod e2e-tests-container-probe-6sdck/liveness-exec is now 1 (52.123518025s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:13:03.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-6sdck" for this suite.
May 26 13:13:09.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:13:09.296: INFO: namespace: e2e-tests-container-probe-6sdck, resource: bindings, ignored listing per whitelist
May 26 13:13:09.298: INFO: namespace e2e-tests-container-probe-6sdck deletion completed in 6.080161242s
• [SLOW TEST:76.424 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:13:09.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:13:25.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-tdfzr" for this suite.
May 26 13:14:15.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:14:15.495: INFO: namespace: e2e-tests-kubelet-test-tdfzr, resource: bindings, ignored listing per whitelist
May 26 13:14:15.550: INFO: namespace e2e-tests-kubelet-test-tdfzr deletion completed in 50.079174631s
• [SLOW TEST:66.251 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:14:15.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an
image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
May 26 13:14:15.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-tsnq6 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
May 26 13:14:30.904: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0526 13:14:30.662485 3124 log.go:172] (0xc0002d0420) (0xc0005cabe0) Create stream\nI0526 13:14:30.662520 3124 log.go:172] (0xc0002d0420) (0xc0005cabe0) Stream added, broadcasting: 1\nI0526 13:14:30.664676 3124 log.go:172] (0xc0002d0420) Reply frame received for 1\nI0526 13:14:30.664717 3124 log.go:172] (0xc0002d0420) (0xc000754320) Create stream\nI0526 13:14:30.664730 3124 log.go:172] (0xc0002d0420) (0xc000754320) Stream added, broadcasting: 3\nI0526 13:14:30.665583 3124 log.go:172] (0xc0002d0420) Reply frame received for 3\nI0526 13:14:30.665624 3124 log.go:172] (0xc0002d0420) (0xc0005cac80) Create stream\nI0526 13:14:30.665636 3124 log.go:172] (0xc0002d0420) (0xc0005cac80) Stream added, broadcasting: 5\nI0526 13:14:30.666377 3124 log.go:172] (0xc0002d0420) Reply frame received for 5\nI0526 13:14:30.666418 3124 log.go:172] (0xc0002d0420) (0xc000674d20) Create stream\nI0526 13:14:30.666437 3124 log.go:172] (0xc0002d0420) (0xc000674d20) Stream added, broadcasting: 7\nI0526 13:14:30.667164 3124 log.go:172] (0xc0002d0420) Reply frame received for 7\nI0526 13:14:30.667328 3124 log.go:172] (0xc000754320) (3) Writing data frame\nI0526 13:14:30.667444 3124 log.go:172] (0xc000754320) (3) Writing data frame\nI0526 13:14:30.668179 3124 log.go:172] (0xc0002d0420) Data frame received for 5\nI0526 13:14:30.668204 3124 log.go:172] (0xc0005cac80) (5) Data frame handling\nI0526 13:14:30.668230 3124 log.go:172] (0xc0005cac80) (5) Data frame sent\nI0526 13:14:30.668674 3124 log.go:172] (0xc0002d0420) Data frame received for 5\nI0526 13:14:30.668683 3124 log.go:172] (0xc0005cac80) (5) Data frame handling\nI0526 13:14:30.668691 3124 log.go:172] (0xc0005cac80) (5) Data frame sent\nI0526 13:14:30.885351 3124 log.go:172] (0xc0002d0420) Data frame received for 5\nI0526 13:14:30.885386 3124 log.go:172] (0xc0002d0420) Data frame received for 7\nI0526 13:14:30.885401 3124 log.go:172] (0xc000674d20) (7) Data frame handling\nI0526 13:14:30.885421 3124 log.go:172] (0xc0005cac80) (5) Data frame handling\nI0526 13:14:30.885447 3124 log.go:172] (0xc0002d0420) Data frame received for 1\nI0526 13:14:30.885457 3124 log.go:172] (0xc0005cabe0) (1) Data frame handling\nI0526 13:14:30.885466 3124 log.go:172] (0xc0005cabe0) (1) Data frame sent\nI0526 13:14:30.885478 3124 log.go:172] (0xc0002d0420) (0xc0005cabe0) Stream removed, broadcasting: 1\nI0526 13:14:30.885528 3124 log.go:172] (0xc0002d0420) (0xc000754320) Stream removed, broadcasting: 3\nI0526 13:14:30.885573 3124 log.go:172] (0xc0002d0420) Go away received\nI0526 13:14:30.885635 3124 log.go:172] (0xc0002d0420) (0xc0005cabe0) Stream removed, broadcasting: 1\nI0526 13:14:30.885668 3124 log.go:172] (0xc0002d0420) (0xc000754320) Stream removed, broadcasting: 3\nI0526 13:14:30.885680 3124 log.go:172] (0xc0002d0420) (0xc0005cac80) Stream removed, broadcasting: 5\nI0526 13:14:30.885698 3124 log.go:172] (0xc0002d0420) (0xc000674d20) Stream removed, broadcasting: 7\n"
May 26 13:14:30.905: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:14:32.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tsnq6" for this suite.
May 26 13:14:42.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:14:43.026: INFO: namespace: e2e-tests-kubectl-tsnq6, resource: bindings, ignored listing per whitelist
May 26 13:14:43.026: INFO: namespace e2e-tests-kubectl-tsnq6 deletion completed in 10.114336207s
• [SLOW TEST:27.476 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:14:43.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:15:03.204: INFO: Waiting up to 3m0s
for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-6r665" for this suite. May 26 13:15:55.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:15:55.278: INFO: namespace: e2e-tests-kubelet-test-6r665, resource: bindings, ignored listing per whitelist May 26 13:15:55.286: INFO: namespace e2e-tests-kubelet-test-6r665 deletion completed in 52.077928859s • [SLOW TEST:72.260 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:15:55.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-9frp9 May 26 13:16:48.182: INFO: Started pod 
liveness-http in namespace e2e-tests-container-probe-9frp9 STEP: checking the pod's current state and verifying that restartCount is present May 26 13:16:48.183: INFO: Initial restart count of pod liveness-http is 0 May 26 13:17:10.222: INFO: Restart count of pod e2e-tests-container-probe-9frp9/liveness-http is now 1 (22.039002709s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:17:10.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-9frp9" for this suite. May 26 13:17:16.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:17:16.292: INFO: namespace: e2e-tests-container-probe-9frp9, resource: bindings, ignored listing per whitelist May 26 13:17:16.339: INFO: namespace e2e-tests-container-probe-9frp9 deletion completed in 6.071158444s • [SLOW TEST:81.053 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:17:16.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in 
namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 26 13:17:16.451: INFO: Waiting up to 5m0s for pod "downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-wh5n9" to be "success or failure" May 26 13:17:16.475: INFO: Pod "downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.401734ms May 26 13:17:18.479: INFO: Pod "downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027330411s May 26 13:17:20.482: INFO: Pod "downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030768235s May 26 13:17:22.486: INFO: Pod "downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034669032s May 26 13:17:24.490: INFO: Pod "downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038345181s May 26 13:17:26.493: INFO: Pod "downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.041672125s May 26 13:17:28.496: INFO: Pod "downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.044813345s May 26 13:17:30.500: INFO: Pod "downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.048931826s May 26 13:17:32.556: INFO: Pod "downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 16.104641406s May 26 13:17:34.559: INFO: Pod "downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.108026906s STEP: Saw pod success May 26 13:17:34.559: INFO: Pod "downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 13:17:34.561: INFO: Trying to get logs from node hunter-worker2 pod downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018 container dapi-container: STEP: delete the pod May 26 13:17:34.601: INFO: Waiting for pod downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018 to disappear May 26 13:17:34.643: INFO: Pod downward-api-38556ebd-9f53-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:17:34.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wh5n9" for this suite. May 26 13:17:40.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:17:40.666: INFO: namespace: e2e-tests-downward-api-wh5n9, resource: bindings, ignored listing per whitelist May 26 13:17:40.726: INFO: namespace e2e-tests-downward-api-wh5n9 deletion completed in 6.080089414s • [SLOW TEST:24.387 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 
13:17:40.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-46dec93f-9f53-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume configMaps May 26 13:17:40.855: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-8556c" to be "success or failure" May 26 13:17:40.860: INFO: Pod "pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187638ms May 26 13:17:42.919: INFO: Pod "pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06393814s May 26 13:17:44.922: INFO: Pod "pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066936516s May 26 13:17:46.926: INFO: Pod "pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070348242s May 26 13:17:48.945: INFO: Pod "pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090017821s May 26 13:17:50.949: INFO: Pod "pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093666455s May 26 13:17:52.952: INFO: Pod "pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 12.097028722s May 26 13:17:54.956: INFO: Pod "pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.100286217s STEP: Saw pod success May 26 13:17:54.956: INFO: Pod "pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 13:17:54.958: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 26 13:17:54.979: INFO: Waiting for pod pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018 to disappear May 26 13:17:54.983: INFO: Pod pod-projected-configmaps-46e11bb2-9f53-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:17:54.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8556c" for this suite. May 26 13:18:01.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:18:01.056: INFO: namespace: e2e-tests-projected-8556c, resource: bindings, ignored listing per whitelist May 26 13:18:01.085: INFO: namespace e2e-tests-projected-8556c deletion completed in 6.099145118s • [SLOW TEST:20.359 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:18:01.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-h2xp8
May 26 13:18:15.210: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-h2xp8
STEP: checking the pod's current state and verifying that restartCount is present
May 26 13:18:15.213: INFO: Initial restart count of pod liveness-http is 0
May 26 13:18:39.251: INFO: Restart count of pod e2e-tests-container-probe-h2xp8/liveness-http is now 1 (24.037983612s elapsed)
May 26 13:18:59.281: INFO: Restart count of pod e2e-tests-container-probe-h2xp8/liveness-http is now 2 (44.067499588s elapsed)
May 26 13:19:19.076: INFO: Restart count of pod e2e-tests-container-probe-h2xp8/liveness-http is now 3 (1m3.86334355s elapsed)
May 26 13:19:39.174: INFO: Restart count of pod e2e-tests-container-probe-h2xp8/liveness-http is now 4 (1m23.961375956s elapsed)
May 26 13:20:39.340: INFO: Restart count of pod e2e-tests-container-probe-h2xp8/liveness-http is now 5 (2m24.1267317s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:20:39.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-h2xp8" for this suite.
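The restart sequence above is produced by a pod whose HTTP liveness probe keeps failing, so the kubelet restarts the container and restartCount climbs 1, 2, 3, and so on. A minimal sketch of such a pod follows; it is not the exact spec the suite submits, and the image, port, and timings are illustrative assumptions:

```yaml
# Sketch of a liveness-http style pod; values are illustrative, not the e2e suite's exact spec.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  restartPolicy: Always        # needed for the restart count to keep increasing
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness # assumption: a server that answers /healthz, then starts failing
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
```

Each failed probe triggers a container restart, and with restartPolicy: Always the count can only grow, which is the monotonicity the test asserts.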
May 26 13:20:45.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:20:45.430: INFO: namespace: e2e-tests-container-probe-h2xp8, resource: bindings, ignored listing per whitelist
May 26 13:20:45.432: INFO: namespace e2e-tests-container-probe-h2xp8 deletion completed in 6.064334321s
• [SLOW TEST:164.346 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:20:45.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
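The [BeforeEach] step above stands up a helper pod that will receive the hook request; the pod created next declares an HTTP postStart hook pointing at it. A minimal sketch of that shape, where the host, port, and path are illustrative assumptions rather than values taken from this log:

```yaml
# Sketch of a pod with an HTTP postStart hook; host/port/path are assumed for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1    # assumption; the pause image appears elsewhere in this log
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart  # hypothetical endpoint on the handler pod
          host: 10.244.0.10          # hypothetical IP of the handler pod from [BeforeEach]
          port: 8080
```

The kubelet issues the GET immediately after the container starts; the test then checks the handler pod to confirm the request arrived.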
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 26 13:21:13.628: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 26 13:21:13.638: INFO: Pod pod-with-poststart-http-hook still exists
May 26 13:21:15.638: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 26 13:21:15.642: INFO: Pod pod-with-poststart-http-hook still exists
May 26 13:21:17.638: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 26 13:21:17.642: INFO: Pod pod-with-poststart-http-hook still exists
May 26 13:21:19.638: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 26 13:21:19.642: INFO: Pod pod-with-poststart-http-hook still exists
May 26 13:21:21.638: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 26 13:21:21.642: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:21:21.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jfgk6" for this suite.
May 26 13:21:43.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:21:43.724: INFO: namespace: e2e-tests-container-lifecycle-hook-jfgk6, resource: bindings, ignored listing per whitelist
May 26 13:21:43.736: INFO: namespace e2e-tests-container-lifecycle-hook-jfgk6 deletion completed in 22.089516582s
• [SLOW TEST:58.304 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:21:43.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 26 13:21:58.412: INFO: Successfully updated pod "labelsupdated7b8d766-9f53-11ea-b1d1-0242ac110018"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:22:00.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4769v" for this suite.
May 26 13:22:22.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:22:22.484: INFO: namespace: e2e-tests-downward-api-4769v, resource: bindings, ignored listing per whitelist
May 26 13:22:22.544: INFO: namespace e2e-tests-downward-api-4769v deletion completed in 22.09227379s
• [SLOW TEST:38.807 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:22:22.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
May 26 13:22:22.643: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-29qpz,SelfLink:/api/v1/namespaces/e2e-tests-watch-29qpz/configmaps/e2e-watch-test-label-changed,UID:eed4e943-9f53-11ea-99e8-0242ac110002,ResourceVersion:12626698,Generation:0,CreationTimestamp:2020-05-26 13:22:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 26 13:22:22.644: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-29qpz,SelfLink:/api/v1/namespaces/e2e-tests-watch-29qpz/configmaps/e2e-watch-test-label-changed,UID:eed4e943-9f53-11ea-99e8-0242ac110002,ResourceVersion:12626699,Generation:0,CreationTimestamp:2020-05-26 13:22:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
May 26 13:22:22.644: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-29qpz,SelfLink:/api/v1/namespaces/e2e-tests-watch-29qpz/configmaps/e2e-watch-test-label-changed,UID:eed4e943-9f53-11ea-99e8-0242ac110002,ResourceVersion:12626700,Generation:0,CreationTimestamp:2020-05-26 13:22:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
May 26 13:22:32.671: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-29qpz,SelfLink:/api/v1/namespaces/e2e-tests-watch-29qpz/configmaps/e2e-watch-test-label-changed,UID:eed4e943-9f53-11ea-99e8-0242ac110002,ResourceVersion:12626721,Generation:0,CreationTimestamp:2020-05-26 13:22:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 26 13:22:32.671: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-29qpz,SelfLink:/api/v1/namespaces/e2e-tests-watch-29qpz/configmaps/e2e-watch-test-label-changed,UID:eed4e943-9f53-11ea-99e8-0242ac110002,ResourceVersion:12626722,Generation:0,CreationTimestamp:2020-05-26 13:22:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
May 26 13:22:32.671: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-29qpz,SelfLink:/api/v1/namespaces/e2e-tests-watch-29qpz/configmaps/e2e-watch-test-label-changed,UID:eed4e943-9f53-11ea-99e8-0242ac110002,ResourceVersion:12626723,Generation:0,CreationTimestamp:2020-05-26 13:22:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:22:32.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-29qpz" for this suite.
May 26 13:22:38.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:22:38.717: INFO: namespace: e2e-tests-watch-29qpz, resource: bindings, ignored listing per whitelist
May 26 13:22:38.742: INFO: namespace e2e-tests-watch-29qpz deletion completed in 6.065895869s
• [SLOW TEST:16.198 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:22:38.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-f87a4afc-9f53-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume secrets
May 26 13:22:38.846: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-vkf6h" to be "success or failure"
May 26 13:22:38.850: INFO: Pod "pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137769ms
May 26 13:22:40.853: INFO: Pod "pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007186748s
May 26 13:22:42.856: INFO: Pod "pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01034591s
May 26 13:22:44.859: INFO: Pod "pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013434797s
May 26 13:22:46.862: INFO: Pod "pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016723827s
May 26 13:22:48.866: INFO: Pod "pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020403063s
May 26 13:22:50.912: INFO: Pod "pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.065908843s
May 26 13:22:52.915: INFO: Pod "pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.069584678s
May 26 13:22:54.918: INFO: Pod "pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 16.072605227s
May 26 13:22:56.921: INFO: Pod "pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.075449641s
STEP: Saw pod success
May 26 13:22:56.921: INFO: Pod "pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:22:56.923: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018 container projected-secret-volume-test:
STEP: delete the pod
May 26 13:22:56.937: INFO: Waiting for pod pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018 to disappear
May 26 13:22:56.942: INFO: Pod pod-projected-secrets-f87dbc99-9f53-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:22:56.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vkf6h" for this suite.
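The pod-projected-secrets pod above consumes the secret through a projected volume whose defaultMode sets the file permissions of the projected keys. A minimal sketch of that layout, with the secret name shortened and the mode value assumed for illustration:

```yaml
# Sketch of a projected secret volume with defaultMode; names shortened, mode assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29  # assumption; busybox appears elsewhere in this log
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400   # the "defaultMode set" under test; 0400 is an illustrative value
      sources:
      - secret:
          name: projected-secret-test  # shortened; the real run appends a generated suffix
```

The pod runs to completion once the container has listed the mounted files, which is why the log shows Pending, then Running, then Succeeded.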
May 26 13:23:02.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:23:03.008: INFO: namespace: e2e-tests-projected-vkf6h, resource: bindings, ignored listing per whitelist
May 26 13:23:03.040: INFO: namespace e2e-tests-projected-vkf6h deletion completed in 6.095040083s
• [SLOW TEST:24.297 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:23:03.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 26 13:23:03.151: INFO: PodSpec: initContainers in spec.initContainers
May 26 13:24:10.726: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-06fcc3a7-9f54-11ea-b1d1-0242ac110018", GenerateName:"", Namespace:"e2e-tests-init-container-fkdgs", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-fkdgs/pods/pod-init-06fcc3a7-9f54-11ea-b1d1-0242ac110018", UID:"06fd426d-9f54-11ea-99e8-0242ac110002", ResourceVersion:"12626975", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726096183, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"151919225"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7mhvt", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0024be680), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7mhvt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7mhvt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"",
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7mhvt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00265e728), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023b3aa0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", 
Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00265e7b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00265e7d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00265e7d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00265e7dc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726096183, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726096183, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726096183, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726096183, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.175", StartTime:(*v1.Time)(0xc0010f8ce0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0010f8d20), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025825b0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://687dc7d28a698b593992a02b87f436b6e5336e39a40162407d655bc7f540b1c6"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0010f8d40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0010f8d00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:24:10.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-fkdgs" for this suite. 
May 26 13:24:33.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:24:33.160: INFO: namespace: e2e-tests-init-container-fkdgs, resource: bindings, ignored listing per whitelist May 26 13:24:33.163: INFO: namespace e2e-tests-init-container-fkdgs deletion completed in 22.386365054s • [SLOW TEST:90.122 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:24:33.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-3cb35b85-9f54-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume secrets May 26 13:24:33.295: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-m7nlt" to be "success or failure" May 26 13:24:33.321: INFO: Pod 
"pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.577136ms May 26 13:24:35.325: INFO: Pod "pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029702281s May 26 13:24:37.328: INFO: Pod "pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032878013s May 26 13:24:39.331: INFO: Pod "pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035567112s May 26 13:24:41.334: INFO: Pod "pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038858352s May 26 13:24:43.339: INFO: Pod "pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.043094725s May 26 13:24:45.375: INFO: Pod "pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.079836417s May 26 13:24:48.219: INFO: Pod "pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.923256199s May 26 13:24:50.222: INFO: Pod "pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 16.926296431s May 26 13:24:52.399: INFO: Pod "pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.103282225s STEP: Saw pod success May 26 13:24:52.399: INFO: Pod "pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 13:24:52.672: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 26 13:24:53.137: INFO: Waiting for pod pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018 to disappear May 26 13:24:53.243: INFO: Pod pod-projected-secrets-3cb3ff1a-9f54-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:24:53.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-m7nlt" for this suite. May 26 13:24:59.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:24:59.294: INFO: namespace: e2e-tests-projected-m7nlt, resource: bindings, ignored listing per whitelist May 26 13:24:59.341: INFO: namespace e2e-tests-projected-m7nlt deletion completed in 6.093576464s • [SLOW TEST:26.178 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:24:59.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-4c4f7908-9f54-11ea-b1d1-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-4c4f7937-9f54-11ea-b1d1-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4c4f7908-9f54-11ea-b1d1-0242ac110018 STEP: Updating configmap cm-test-opt-upd-4c4f7937-9f54-11ea-b1d1-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-4c4f794d-9f54-11ea-b1d1-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:26:25.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bdjst" for this suite. 
May 26 13:26:47.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:26:47.950: INFO: namespace: e2e-tests-projected-bdjst, resource: bindings, ignored listing per whitelist May 26 13:26:47.984: INFO: namespace e2e-tests-projected-bdjst deletion completed in 22.168566787s • [SLOW TEST:108.643 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:26:47.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 26 13:27:01.174: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:27:02.189: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-rv927" for this suite. May 26 13:27:24.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:27:24.290: INFO: namespace: e2e-tests-replicaset-rv927, resource: bindings, ignored listing per whitelist May 26 13:27:24.303: INFO: namespace e2e-tests-replicaset-rv927 deletion completed in 22.112590524s • [SLOW TEST:36.319 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:27:24.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 26 13:27:39.048: INFO: Successfully updated pod "annotationupdatea2b675b8-9f54-11ea-b1d1-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:27:41.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kf47f" for this suite. May 26 13:28:03.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:28:03.135: INFO: namespace: e2e-tests-projected-kf47f, resource: bindings, ignored listing per whitelist May 26 13:28:03.137: INFO: namespace e2e-tests-projected-kf47f deletion completed in 22.06819732s • [SLOW TEST:38.834 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:28:03.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 26 13:28:03.369: INFO: Waiting up to 5m0s for pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-pthzj" to be "success or failure" May 26 13:28:03.444: INFO: Pod 
"pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 75.320304ms May 26 13:28:05.448: INFO: Pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078796502s May 26 13:28:07.451: INFO: Pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08222579s May 26 13:28:09.455: INFO: Pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085433701s May 26 13:28:11.491: INFO: Pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122313799s May 26 13:28:13.494: INFO: Pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.125298177s May 26 13:28:15.498: INFO: Pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.128679106s May 26 13:28:17.502: INFO: Pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.132418085s May 26 13:28:19.505: INFO: Pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.136039044s May 26 13:28:21.508: INFO: Pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.139251438s May 26 13:28:23.512: INFO: Pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 20.142527371s May 26 13:28:25.515: INFO: Pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.145859753s STEP: Saw pod success May 26 13:28:25.515: INFO: Pod "pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 13:28:25.518: INFO: Trying to get logs from node hunter-worker2 pod pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018 container test-container: STEP: delete the pod May 26 13:28:25.536: INFO: Waiting for pod pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018 to disappear May 26 13:28:25.541: INFO: Pod pod-b9eb9cd6-9f54-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:28:25.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-pthzj" for this suite. May 26 13:28:31.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:28:31.590: INFO: namespace: e2e-tests-emptydir-pthzj, resource: bindings, ignored listing per whitelist May 26 13:28:31.618: INFO: namespace e2e-tests-emptydir-pthzj deletion completed in 6.074371244s • [SLOW TEST:28.481 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:28:31.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 26 13:28:31.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 26 13:28:35.398: INFO: stderr: "" May 26 13:28:35.398: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:28:35.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xfwlz" for this suite. 
May 26 13:28:41.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:28:41.463: INFO: namespace: e2e-tests-kubectl-xfwlz, resource: bindings, ignored listing per whitelist May 26 13:28:41.489: INFO: namespace e2e-tests-kubectl-xfwlz deletion completed in 6.087019283s • [SLOW TEST:9.870 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:28:41.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 26 13:28:41.646: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-kbf2w,SelfLink:/api/v1/namespaces/e2e-tests-watch-kbf2w/configmaps/e2e-watch-test-watch-closed,UID:d0b3ce8d-9f54-11ea-99e8-0242ac110002,ResourceVersion:12627677,Generation:0,CreationTimestamp:2020-05-26 13:28:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 26 13:28:41.646: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-kbf2w,SelfLink:/api/v1/namespaces/e2e-tests-watch-kbf2w/configmaps/e2e-watch-test-watch-closed,UID:d0b3ce8d-9f54-11ea-99e8-0242ac110002,ResourceVersion:12627678,Generation:0,CreationTimestamp:2020-05-26 13:28:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 26 13:28:41.663: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-kbf2w,SelfLink:/api/v1/namespaces/e2e-tests-watch-kbf2w/configmaps/e2e-watch-test-watch-closed,UID:d0b3ce8d-9f54-11ea-99e8-0242ac110002,ResourceVersion:12627679,Generation:0,CreationTimestamp:2020-05-26 13:28:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 26 13:28:41.663: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-kbf2w,SelfLink:/api/v1/namespaces/e2e-tests-watch-kbf2w/configmaps/e2e-watch-test-watch-closed,UID:d0b3ce8d-9f54-11ea-99e8-0242ac110002,ResourceVersion:12627680,Generation:0,CreationTimestamp:2020-05-26 13:28:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:28:41.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-kbf2w" for this suite. 
May 26 13:28:47.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:28:47.715: INFO: namespace: e2e-tests-watch-kbf2w, resource: bindings, ignored listing per whitelist May 26 13:28:47.734: INFO: namespace e2e-tests-watch-kbf2w deletion completed in 6.065688542s • [SLOW TEST:6.245 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:28:47.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get 
the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:31:37.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-jt9lp" for this suite. 
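The three containers above, `terminate-cmd-rpa`, `terminate-cmd-rpof`, and `terminate-cmd-rpn`, appear to exercise the three `restartPolicy` values (reading the suffixes as Always/OnFailure/Never is an inference from the names, not something the log confirms). A minimal sketch of the kind of pod involved, with an assumed image and command:

```yaml
# Hypothetical sketch of one of the pods this test creates: a container
# that exits with a nonzero status, run under a given restartPolicy.
# Image and command are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof
spec:
  restartPolicy: OnFailure   # the test repeats this for Always and Never
  containers:
  - name: terminate-cmd-rpof
    image: busybox
    command: ["sh", "-c", "exit 1"]
```

The test then asserts the expected `RestartCount`, `Phase`, `Ready` condition, and `State` for each policy, as the STEP lines show.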
May 26 13:31:44.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:31:44.132: INFO: namespace: e2e-tests-container-runtime-jt9lp, resource: bindings, ignored listing per whitelist May 26 13:31:44.134: INFO: namespace e2e-tests-container-runtime-jt9lp deletion completed in 6.131825367s • [SLOW TEST:176.400 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:31:44.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all May 26 13:31:44.267: INFO: Waiting up to 5m0s for pod "client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018" in namespace "e2e-tests-containers-n9jl7" to be "success or failure" May 26 
13:31:44.270: INFO: Pod "client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.885624ms May 26 13:31:46.279: INFO: Pod "client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011173346s May 26 13:31:48.282: INFO: Pod "client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014905031s May 26 13:31:50.285: INFO: Pod "client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017833941s May 26 13:31:52.288: INFO: Pod "client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02089932s May 26 13:31:54.292: INFO: Pod "client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024205854s May 26 13:31:56.295: INFO: Pod "client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027446632s May 26 13:31:58.298: INFO: Pod "client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.030293007s STEP: Saw pod success May 26 13:31:58.298: INFO: Pod "client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 13:31:58.300: INFO: Trying to get logs from node hunter-worker2 pod client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018 container test-container: STEP: delete the pod May 26 13:31:58.322: INFO: Waiting for pod client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018 to disappear May 26 13:31:58.381: INFO: Pod client-containers-3d97f9fc-9f55-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:31:58.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-n9jl7" for this suite. May 26 13:32:04.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:32:04.405: INFO: namespace: e2e-tests-containers-n9jl7, resource: bindings, ignored listing per whitelist May 26 13:32:04.451: INFO: namespace e2e-tests-containers-n9jl7 deletion completed in 6.067288433s • [SLOW TEST:20.316 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:32:04.451: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 26 13:32:05.723: INFO: Waiting up to 5m0s for pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5" in namespace "e2e-tests-svcaccounts-gxbtq" to be "success or failure" May 26 13:32:05.745: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.064843ms May 26 13:32:07.749: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025193114s May 26 13:32:09.752: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028700985s May 26 13:32:11.755: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031295377s May 26 13:32:13.758: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034892282s May 26 13:32:15.761: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.037652131s May 26 13:32:17.801: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.077600908s May 26 13:32:19.804: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.080686537s May 26 13:32:21.807: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.083696351s May 26 13:32:23.810: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.087160682s May 26 13:32:25.843: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.119191653s May 26 13:32:27.845: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.121658413s May 26 13:32:29.848: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.124371305s STEP: Saw pod success May 26 13:32:29.848: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5" satisfied condition "success or failure" May 26 13:32:29.850: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5 container token-test: STEP: delete the pod May 26 13:32:29.888: INFO: Waiting for pod pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5 to disappear May 26 13:32:29.920: INFO: Pod pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-qnjj5 no longer exists STEP: Creating a pod to test consume service account root CA May 26 13:32:29.922: INFO: Waiting up to 5m0s for pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg" in namespace "e2e-tests-svcaccounts-gxbtq" to be "success or failure" May 26 13:32:29.944: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg": Phase="Pending", Reason="", readiness=false. Elapsed: 21.801658ms May 26 13:32:31.987: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.064076951s May 26 13:32:33.990: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067479671s May 26 13:32:35.993: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070624785s May 26 13:32:37.996: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074000303s May 26 13:32:39.999: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.076885604s May 26 13:32:42.004: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.081403115s May 26 13:32:44.008: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.085416533s May 26 13:32:46.011: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg": Phase="Pending", Reason="", readiness=false. Elapsed: 16.08897534s May 26 13:32:48.014: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg": Phase="Pending", Reason="", readiness=false. Elapsed: 18.091332165s May 26 13:32:50.017: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg": Phase="Pending", Reason="", readiness=false. Elapsed: 20.094744792s May 26 13:32:52.020: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.097725875s STEP: Saw pod success May 26 13:32:52.020: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg" satisfied condition "success or failure" May 26 13:32:52.022: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg container root-ca-test: STEP: delete the pod May 26 13:32:52.055: INFO: Waiting for pod pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg to disappear May 26 13:32:52.067: INFO: Pod pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-4xswg no longer exists STEP: Creating a pod to test consume service account namespace May 26 13:32:52.075: INFO: Waiting up to 5m0s for pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv" in namespace "e2e-tests-svcaccounts-gxbtq" to be "success or failure" May 26 13:32:52.107: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv": Phase="Pending", Reason="", readiness=false. Elapsed: 31.966327ms May 26 13:32:54.110: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035548347s May 26 13:32:56.114: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038768346s May 26 13:32:58.116: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040924567s May 26 13:33:00.119: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044480847s May 26 13:33:02.122: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.047453782s May 26 13:33:04.124: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.049750426s May 26 13:33:06.128: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.053479413s May 26 13:33:08.132: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.05697938s May 26 13:33:10.136: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv": Phase="Pending", Reason="", readiness=false. Elapsed: 18.060901662s May 26 13:33:12.139: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.064172491s STEP: Saw pod success May 26 13:33:12.139: INFO: Pod "pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv" satisfied condition "success or failure" May 26 13:33:12.141: INFO: Trying to get logs from node hunter-worker pod pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv container namespace-test: STEP: delete the pod May 26 13:33:12.161: INFO: Waiting for pod pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv to disappear May 26 13:33:12.191: INFO: Pod pod-service-account-4a61fa51-9f55-11ea-b1d1-0242ac110018-jmpbv no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:33:12.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-gxbtq" for this suite. 
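The three pods above (`token-test`, `root-ca-test`, `namespace-test`) correspond to the three files Kubernetes projects into every pod from the default service account. The mount path and file names below are standard Kubernetes behavior; the image and command are illustrative assumptions, since the test's pod spec is not shown in the log:

```yaml
# Sketch of what the ServiceAccounts test verifies: the auto-created
# API token, cluster root CA, and namespace are mounted at a fixed path.
apiVersion: v1
kind: Pod
metadata:
  name: svc-account-demo
spec:
  containers:
  - name: token-test
    image: busybox
    # The three files the test pods read, one per pod:
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount"]
    # token      -- the service account bearer token
    # ca.crt     -- the cluster root CA
    # namespace  -- the pod's namespace
```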
May 26 13:33:18.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:33:18.293: INFO: namespace: e2e-tests-svcaccounts-gxbtq, resource: bindings, ignored listing per whitelist May 26 13:33:18.365: INFO: namespace e2e-tests-svcaccounts-gxbtq deletion completed in 6.149619265s • [SLOW TEST:73.914 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:33:18.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-pqjw STEP: Creating a pod to test atomic-volume-subpath May 26 13:33:18.574: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pqjw" in namespace "e2e-tests-subpath-v9jvr" to be "success or failure" May 26 13:33:18.580: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.083418ms May 26 13:33:20.583: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009244704s May 26 13:33:22.586: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011560206s May 26 13:33:24.589: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015281551s May 26 13:33:26.620: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045745871s May 26 13:33:28.623: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.048536638s May 26 13:33:30.626: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.051416455s May 26 13:33:32.629: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.055164851s May 26 13:33:34.633: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Pending", Reason="", readiness=false. Elapsed: 16.05871092s May 26 13:33:36.636: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Pending", Reason="", readiness=false. Elapsed: 18.062075901s May 26 13:33:38.640: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Pending", Reason="", readiness=false. Elapsed: 20.06549699s May 26 13:33:40.643: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Pending", Reason="", readiness=false. Elapsed: 22.069184861s May 26 13:33:42.646: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Running", Reason="", readiness=false. Elapsed: 24.072109791s May 26 13:33:44.650: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Running", Reason="", readiness=false. Elapsed: 26.075979706s May 26 13:33:46.654: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.079885226s May 26 13:33:48.657: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Running", Reason="", readiness=false. Elapsed: 30.08310106s May 26 13:33:50.661: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Running", Reason="", readiness=false. Elapsed: 32.086407348s May 26 13:33:52.664: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Running", Reason="", readiness=false. Elapsed: 34.090299195s May 26 13:33:54.667: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Running", Reason="", readiness=false. Elapsed: 36.093294903s May 26 13:33:56.679: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Running", Reason="", readiness=false. Elapsed: 38.104614642s May 26 13:33:58.683: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Running", Reason="", readiness=false. Elapsed: 40.108424864s May 26 13:34:00.686: INFO: Pod "pod-subpath-test-projected-pqjw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.111451602s STEP: Saw pod success May 26 13:34:00.686: INFO: Pod "pod-subpath-test-projected-pqjw" satisfied condition "success or failure" May 26 13:34:00.688: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-pqjw container test-container-subpath-projected-pqjw: STEP: delete the pod May 26 13:34:00.721: INFO: Waiting for pod pod-subpath-test-projected-pqjw to disappear May 26 13:34:00.734: INFO: Pod pod-subpath-test-projected-pqjw no longer exists STEP: Deleting pod pod-subpath-test-projected-pqjw May 26 13:34:00.734: INFO: Deleting pod "pod-subpath-test-projected-pqjw" in namespace "e2e-tests-subpath-v9jvr" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:34:00.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-v9jvr" for this suite. 
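The subpath test above mounts a single entry of an atomic-writer (projected) volume via `subPath`. The test's actual volume sources are not shown in the log; this sketch assumes a downward-API projection purely for illustration:

```yaml
# Illustrative sketch of a projected volume consumed through subPath,
# the mechanism pod-subpath-test-projected-pqjw exercises. The
# downwardAPI source and file names here are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test/podname-file"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test/podname-file
      subPath: podname   # mounts just this one projected file
```

Atomic-writer volumes update their contents via symlink swaps, which is why subPath mounts into them get dedicated conformance coverage.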
May 26 13:34:06.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:34:06.777: INFO: namespace: e2e-tests-subpath-v9jvr, resource: bindings, ignored listing per whitelist May 26 13:34:06.826: INFO: namespace e2e-tests-subpath-v9jvr deletion completed in 6.087309657s • [SLOW TEST:48.461 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:34:06.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 26 13:34:06.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc 
--image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-c8p2h' May 26 13:34:07.104: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 26 13:34:07.104: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 26 13:34:09.113: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-g45zq] May 26 13:34:09.113: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-g45zq" in namespace "e2e-tests-kubectl-c8p2h" to be "running and ready" May 26 13:34:09.116: INFO: Pod "e2e-test-nginx-rc-g45zq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.568898ms May 26 13:34:11.119: INFO: Pod "e2e-test-nginx-rc-g45zq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005901394s May 26 13:34:13.124: INFO: Pod "e2e-test-nginx-rc-g45zq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01126882s May 26 13:34:15.128: INFO: Pod "e2e-test-nginx-rc-g45zq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014603695s May 26 13:34:17.130: INFO: Pod "e2e-test-nginx-rc-g45zq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017151896s May 26 13:34:19.134: INFO: Pod "e2e-test-nginx-rc-g45zq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021018548s May 26 13:34:21.138: INFO: Pod "e2e-test-nginx-rc-g45zq": Phase="Running", Reason="", readiness=true. Elapsed: 12.024419704s May 26 13:34:21.138: INFO: Pod "e2e-test-nginx-rc-g45zq" satisfied condition "running and ready" May 26 13:34:21.138: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-g45zq] May 26 13:34:21.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-c8p2h' May 26 13:34:21.325: INFO: stderr: "" May 26 13:34:21.325: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 26 13:34:21.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-c8p2h' May 26 13:34:21.639: INFO: stderr: "" May 26 13:34:21.639: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:34:21.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-c8p2h" for this suite. May 26 13:34:43.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:34:43.714: INFO: namespace: e2e-tests-kubectl-c8p2h, resource: bindings, ignored listing per whitelist May 26 13:34:43.743: INFO: namespace e2e-tests-kubectl-c8p2h deletion completed in 22.100510983s • [SLOW TEST:36.917 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] 
ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:34:43.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:35:00.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-fztdn" for this suite. May 26 13:35:22.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:35:22.979: INFO: namespace: e2e-tests-replication-controller-fztdn, resource: bindings, ignored listing per whitelist May 26 13:35:23.003: INFO: namespace e2e-tests-replication-controller-fztdn deletion completed in 22.07208262s • [SLOW TEST:39.261 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:35:23.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0526 13:36:03.128338 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 26 13:36:03.128: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach]
[sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:36:03.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mkdl4" for this suite. May 26 13:36:11.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:36:11.174: INFO: namespace: e2e-tests-gc-mkdl4, resource: bindings, ignored listing per whitelist May 26 13:36:11.214: INFO: namespace e2e-tests-gc-mkdl4 deletion completed in 8.082091655s • [SLOW TEST:48.210 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:36:11.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 26 13:36:11.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-ctwnf'
May 26 13:36:11.715: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 26 13:36:11.715: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
May 26 13:36:11.867: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
May 26 13:36:12.109: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
May 26 13:36:12.169: INFO: scanned /root for discovery docs:
May 26 13:36:12.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-ctwnf'
May 26 13:36:35.078: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 26 13:36:35.078: INFO: stdout: "Created e2e-test-nginx-rc-3bc5e7eb5d64863fc62a94c6e4c8e5a9\nScaling up e2e-test-nginx-rc-3bc5e7eb5d64863fc62a94c6e4c8e5a9 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-3bc5e7eb5d64863fc62a94c6e4c8e5a9 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-3bc5e7eb5d64863fc62a94c6e4c8e5a9 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
May 26 13:36:35.078: INFO: stdout: "Created e2e-test-nginx-rc-3bc5e7eb5d64863fc62a94c6e4c8e5a9\nScaling up e2e-test-nginx-rc-3bc5e7eb5d64863fc62a94c6e4c8e5a9 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-3bc5e7eb5d64863fc62a94c6e4c8e5a9 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-3bc5e7eb5d64863fc62a94c6e4c8e5a9 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
May 26 13:36:35.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ctwnf'
May 26 13:36:35.177: INFO: stderr: ""
May 26 13:36:35.177: INFO: stdout: "e2e-test-nginx-rc-3bc5e7eb5d64863fc62a94c6e4c8e5a9-5cjlr "
May 26 13:36:35.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-3bc5e7eb5d64863fc62a94c6e4c8e5a9-5cjlr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ctwnf'
May 26 13:36:35.269: INFO: stderr: ""
May 26 13:36:35.269: INFO: stdout: "true"
May 26 13:36:35.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-3bc5e7eb5d64863fc62a94c6e4c8e5a9-5cjlr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ctwnf'
May 26 13:36:35.352: INFO: stderr: ""
May 26 13:36:35.352: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
May 26 13:36:35.352: INFO: e2e-test-nginx-rc-3bc5e7eb5d64863fc62a94c6e4c8e5a9-5cjlr is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
May 26 13:36:35.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ctwnf'
May 26 13:36:35.458: INFO: stderr: ""
May 26 13:36:35.458: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:36:35.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ctwnf" for this suite.
May 26 13:36:57.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:36:57.576: INFO: namespace: e2e-tests-kubectl-ctwnf, resource: bindings, ignored listing per whitelist
May 26 13:36:57.584: INFO: namespace e2e-tests-kubectl-ctwnf deletion completed in 22.07831429s
• [SLOW TEST:46.370 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:36:57.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
May 26 13:36:57.780: INFO: Waiting up to 5m0s for pod "pod-f87458f3-9f55-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-qnrwx" to be "success or failure"
May 26 13:36:57.784: INFO: Pod "pod-f87458f3-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.675835ms
May 26 13:36:59.788: INFO: Pod "pod-f87458f3-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007393613s
May 26 13:37:01.791: INFO: Pod "pod-f87458f3-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010919576s
May 26 13:37:03.794: INFO: Pod "pod-f87458f3-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013765232s
May 26 13:37:05.811: INFO: Pod "pod-f87458f3-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03034764s
May 26 13:37:07.813: INFO: Pod "pod-f87458f3-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.033104359s
May 26 13:37:09.816: INFO: Pod "pod-f87458f3-9f55-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.03568459s
May 26 13:37:11.819: INFO: Pod "pod-f87458f3-9f55-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.039050659s
STEP: Saw pod success
May 26 13:37:11.819: INFO: Pod "pod-f87458f3-9f55-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:37:11.821: INFO: Trying to get logs from node hunter-worker pod pod-f87458f3-9f55-11ea-b1d1-0242ac110018 container test-container:
STEP: delete the pod
May 26 13:37:11.856: INFO: Waiting for pod pod-f87458f3-9f55-11ea-b1d1-0242ac110018 to disappear
May 26 13:37:11.880: INFO: Pod pod-f87458f3-9f55-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:37:11.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qnrwx" for this suite.
May 26 13:37:17.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:37:17.961: INFO: namespace: e2e-tests-emptydir-qnrwx, resource: bindings, ignored listing per whitelist
May 26 13:37:17.994: INFO: namespace e2e-tests-emptydir-qnrwx deletion completed in 6.11049661s
• [SLOW TEST:20.410 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:37:17.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-049d8a1d-9f56-11ea-b1d1-0242ac110018
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:37:38.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lv5nl" for this suite.
May 26 13:38:00.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:38:00.265: INFO: namespace: e2e-tests-configmap-lv5nl, resource: bindings, ignored listing per whitelist
May 26 13:38:00.302: INFO: namespace e2e-tests-configmap-lv5nl deletion completed in 22.092736645s
• [SLOW TEST:42.308 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:38:00.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 26 13:38:22.847: INFO: Waiting up to 5m0s for pod "client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018" in namespace "e2e-tests-pods-gdrjm" to be "success or failure"
May 26 13:38:22.962: INFO: Pod "client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 115.227843ms
May 26 13:38:25.064: INFO: Pod "client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216938962s
May 26 13:38:27.067: INFO: Pod "client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220173373s
May 26 13:38:29.070: INFO: Pod "client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223494619s
May 26 13:38:31.073: INFO: Pod "client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22677359s
May 26 13:38:33.077: INFO: Pod "client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.229950004s
May 26 13:38:35.080: INFO: Pod "client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.233531838s
May 26 13:38:37.084: INFO: Pod "client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 14.236945325s
May 26 13:38:39.086: INFO: Pod "client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.239323376s
STEP: Saw pod success
May 26 13:38:39.086: INFO: Pod "client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:38:39.088: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018 container env3cont:
STEP: delete the pod
May 26 13:38:39.101: INFO: Waiting for pod client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018 to disappear
May 26 13:38:39.106: INFO: Pod client-envvars-2b2808a0-9f56-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:38:39.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gdrjm" for this suite.
May 26 13:39:33.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:39:33.194: INFO: namespace: e2e-tests-pods-gdrjm, resource: bindings, ignored listing per whitelist
May 26 13:39:33.194: INFO: namespace e2e-tests-pods-gdrjm deletion completed in 54.085409753s
• [SLOW TEST:92.892 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:39:33.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-wk6r5/configmap-test-5530779c-9f56-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume configMaps
May 26 13:39:33.360: INFO: Waiting up to 5m0s for pod "pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018" in namespace "e2e-tests-configmap-wk6r5" to be "success or failure"
May 26 13:39:33.364: INFO: Pod "pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.408243ms
May 26 13:39:35.368: INFO: Pod "pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007465461s
May 26 13:39:37.370: INFO: Pod "pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010182663s
May 26 13:39:39.373: INFO: Pod "pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013384979s
May 26 13:39:41.376: INFO: Pod "pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016350044s
May 26 13:39:43.379: INFO: Pod "pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019254192s
May 26 13:39:45.383: INFO: Pod "pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.022422728s
May 26 13:39:47.385: INFO: Pod "pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 14.02499016s
May 26 13:39:49.388: INFO: Pod "pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.028247608s
STEP: Saw pod success
May 26 13:39:49.388: INFO: Pod "pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:39:49.391: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018 container env-test:
STEP: delete the pod
May 26 13:39:49.940: INFO: Waiting for pod pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018 to disappear
May 26 13:39:49.963: INFO: Pod pod-configmaps-55316db3-9f56-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:39:49.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wk6r5" for this suite.
May 26 13:39:56.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:39:56.196: INFO: namespace: e2e-tests-configmap-wk6r5, resource: bindings, ignored listing per whitelist
May 26 13:39:56.221: INFO: namespace e2e-tests-configmap-wk6r5 deletion completed in 6.255141426s
• [SLOW TEST:23.027 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:39:56.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 26 13:39:56.300: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-qznvp" to be "success or failure"
May 26 13:39:56.335: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 34.778723ms
May 26 13:39:58.338: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038635047s
May 26 13:40:00.342: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042223972s
May 26 13:40:02.345: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045236045s
May 26 13:40:04.686: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.386534801s
May 26 13:40:06.690: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.389898777s
May 26 13:40:08.693: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.393450316s
May 26 13:40:11.054: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.753754626s
May 26 13:40:13.057: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.756991738s
May 26 13:40:16.376: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.076230988s
May 26 13:40:20.120: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.819755281s
May 26 13:40:22.123: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.823603026s
May 26 13:40:24.126: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 27.826651279s
May 26 13:40:26.130: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.830090648s
May 26 13:40:28.224: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 31.923681856s
May 26 13:40:30.293: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 33.993199176s
May 26 13:40:32.296: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 35.995825436s
May 26 13:40:34.299: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.999247765s
STEP: Saw pod success
May 26 13:40:34.299: INFO: Pod "downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:40:34.302: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018 container client-container:
STEP: delete the pod
May 26 13:40:34.368: INFO: Waiting for pod downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018 to disappear
May 26 13:40:34.400: INFO: Pod downwardapi-volume-62dde329-9f56-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:40:34.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qznvp" for this suite.
May 26 13:40:40.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:40:40.442: INFO: namespace: e2e-tests-downward-api-qznvp, resource: bindings, ignored listing per whitelist
May 26 13:40:40.484: INFO: namespace e2e-tests-downward-api-qznvp deletion completed in 6.070993078s
• [SLOW TEST:44.263 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:40:40.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-7xdq
STEP: Creating a pod to test atomic-volume-subpath
May 26 13:40:40.642: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7xdq" in namespace "e2e-tests-subpath-nlrx4" to be "success or failure"
May 26 13:40:40.647: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343351ms
May 26 13:40:42.649: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007055583s
May 26 13:40:44.659: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016473171s
May 26 13:40:46.665: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022848353s
May 26 13:40:49.016: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.373748323s
May 26 13:40:51.020: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.377879168s
May 26 13:40:53.024: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.381283726s
May 26 13:40:55.031: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.388426139s
May 26 13:40:57.034: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.391883082s
May 26 13:40:59.038: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.395766561s
May 26 13:41:01.042: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 20.399492711s
May 26 13:41:03.045: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 22.403187768s
May 26 13:41:05.049: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 24.406224102s
May 26 13:41:07.052: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Pending", Reason="", readiness=false. Elapsed: 26.409812703s
May 26 13:41:09.114: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Running", Reason="", readiness=true. Elapsed: 28.471701256s
May 26 13:41:11.117: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Running", Reason="", readiness=false. Elapsed: 30.475140502s
May 26 13:41:13.120: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Running", Reason="", readiness=false. Elapsed: 32.477991414s
May 26 13:41:15.124: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Running", Reason="", readiness=false. Elapsed: 34.481444921s
May 26 13:41:17.128: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Running", Reason="", readiness=false. Elapsed: 36.485225638s
May 26 13:41:19.131: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Running", Reason="", readiness=false. Elapsed: 38.488256214s
May 26 13:41:21.259: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Running", Reason="", readiness=false. Elapsed: 40.616935825s
May 26 13:41:23.263: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Running", Reason="", readiness=false. Elapsed: 42.620683149s
May 26 13:41:25.403: INFO: Pod "pod-subpath-test-configmap-7xdq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 44.760994016s
STEP: Saw pod success
May 26 13:41:25.403: INFO: Pod "pod-subpath-test-configmap-7xdq" satisfied condition "success or failure"
May 26 13:41:25.407: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-7xdq container test-container-subpath-configmap-7xdq:
STEP: delete the pod
May 26 13:41:25.723: INFO: Waiting for pod pod-subpath-test-configmap-7xdq to disappear
May 26 13:41:26.020: INFO: Pod pod-subpath-test-configmap-7xdq no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7xdq
May 26 13:41:26.020: INFO: Deleting pod "pod-subpath-test-configmap-7xdq" in namespace "e2e-tests-subpath-nlrx4"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:41:26.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-nlrx4" for this suite.
May 26 13:41:36.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:41:36.095: INFO: namespace: e2e-tests-subpath-nlrx4, resource: bindings, ignored listing per whitelist
May 26 13:41:36.106: INFO: namespace e2e-tests-subpath-nlrx4 deletion completed in 10.080551036s
• [SLOW TEST:55.622 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:41:36.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
May 26 13:41:36.244: INFO: Waiting up to 5m0s for pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018" in namespace "e2e-tests-emptydir-q8gbr" to be "success or failure"
May 26 13:41:36.247: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.594682ms
May 26 13:41:38.251: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00710364s
May 26 13:41:40.254: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010105889s
May 26 13:41:42.672: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428119054s
May 26 13:41:44.675: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.431279626s
May 26 13:41:46.678: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.434732141s
May 26 13:41:49.480: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.235983036s
May 26 13:41:51.484: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.239859263s
May 26 13:41:53.487: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.243333013s
May 26 13:41:55.490: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.246591464s
May 26 13:41:57.496: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.252017946s
May 26 13:42:01.231: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 24.986863878s
May 26 13:42:03.233: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.989808763s
May 26 13:42:06.597: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.353369913s
May 26 13:42:09.228: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.984177398s
May 26 13:42:11.231: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 34.987820407s
May 26 13:42:13.301: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 37.057276901s
May 26 13:42:18.607: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 42.363375323s
May 26 13:42:20.610: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 44.366097581s
May 26 13:42:22.786: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 46.542029409s
May 26 13:42:24.954: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 48.710139094s
May 26 13:42:26.977: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false.
Elapsed: 50.73332941s May 26 13:42:29.187: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 52.943473098s May 26 13:42:31.190: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 54.946495984s STEP: Saw pod success May 26 13:42:31.190: INFO: Pod "pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 13:42:31.192: INFO: Trying to get logs from node hunter-worker2 pod pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018 container test-container: STEP: delete the pod May 26 13:42:31.278: INFO: Waiting for pod pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018 to disappear May 26 13:42:31.284: INFO: Pod pod-9e6faaf1-9f56-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:42:31.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-q8gbr" for this suite. 
May 26 13:42:37.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:42:37.359: INFO: namespace: e2e-tests-emptydir-q8gbr, resource: bindings, ignored listing per whitelist
May 26 13:42:37.382: INFO: namespace e2e-tests-emptydir-q8gbr deletion completed in 6.094271572s
• [SLOW TEST:61.275 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:42:37.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
May 26 13:42:37.502: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix970610503/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:42:37.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fcw9q" for this suite.
May 26 13:42:43.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:42:43.646: INFO: namespace: e2e-tests-kubectl-fcw9q, resource: bindings, ignored listing per whitelist
May 26 13:42:43.686: INFO: namespace e2e-tests-kubectl-fcw9q deletion completed in 6.092249153s
• [SLOW TEST:6.305 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:42:43.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
May 26 13:42:43.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9v7ln'
May 26 13:42:46.657: INFO: stderr: ""
May 26 13:42:46.657: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
May 26 13:42:47.661: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:47.661: INFO: Found 0 / 1
May 26 13:42:48.661: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:48.661: INFO: Found 0 / 1
May 26 13:42:49.660: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:49.660: INFO: Found 0 / 1
May 26 13:42:51.140: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:51.140: INFO: Found 0 / 1
May 26 13:42:51.660: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:51.660: INFO: Found 0 / 1
May 26 13:42:52.661: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:52.661: INFO: Found 0 / 1
May 26 13:42:53.661: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:53.661: INFO: Found 0 / 1
May 26 13:42:54.661: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:54.661: INFO: Found 0 / 1
May 26 13:42:56.080: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:56.080: INFO: Found 0 / 1
May 26 13:42:56.661: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:56.661: INFO: Found 0 / 1
May 26 13:42:57.804: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:57.804: INFO: Found 0 / 1
May 26 13:42:58.661: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:58.661: INFO: Found 0 / 1
May 26 13:42:59.944: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:42:59.944: INFO: Found 0 / 1
May 26 13:43:00.660: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:43:00.660: INFO: Found 0 / 1
May 26 13:43:01.666: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:43:01.666: INFO: Found 0 / 1
May 26 13:43:02.662: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:43:02.662: INFO: Found 0 / 1
May 26 13:43:03.660: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:43:03.660: INFO: Found 0 / 1
May 26 13:43:04.661: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:43:04.661: INFO: Found 1 / 1
May 26 13:43:04.661: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 26 13:43:04.664: INFO: Selector matched 1 pods for map[app:redis]
May 26 13:43:04.664: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
May 26 13:43:04.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sk5pl redis-master --namespace=e2e-tests-kubectl-9v7ln'
May 26 13:43:04.778: INFO: stderr: ""
May 26 13:43:04.778: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 26 May 13:43:03.007 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 May 13:43:03.034 # Server started, Redis version 3.2.12\n1:M 26 May 13:43:03.034 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 May 13:43:03.034 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
May 26 13:43:04.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-sk5pl redis-master --namespace=e2e-tests-kubectl-9v7ln --tail=1'
May 26 13:43:04.869: INFO: stderr: ""
May 26 13:43:04.869: INFO: stdout: "1:M 26 May 13:43:03.034 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
May 26 13:43:04.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-sk5pl redis-master --namespace=e2e-tests-kubectl-9v7ln --limit-bytes=1'
May 26 13:43:04.975: INFO: stderr: ""
May 26 13:43:04.975: INFO: stdout: " "
STEP: exposing timestamps
May 26 13:43:04.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-sk5pl redis-master --namespace=e2e-tests-kubectl-9v7ln --tail=1 --timestamps'
May 26 13:43:05.086: INFO: stderr: ""
May 26 13:43:05.086: INFO: stdout: "2020-05-26T13:43:03.034881773Z 1:M 26 May 13:43:03.034 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
May 26 13:43:07.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-sk5pl redis-master --namespace=e2e-tests-kubectl-9v7ln --since=1s'
May 26 13:43:07.889: INFO: stderr: ""
May 26 13:43:07.889: INFO: stdout: ""
May 26 13:43:07.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-sk5pl redis-master --namespace=e2e-tests-kubectl-9v7ln --since=24h'
May 26 13:43:07.989: INFO: stderr: ""
May 26 13:43:07.989: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 26 May 13:43:03.007 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 May 13:43:03.034 # Server started, Redis version 3.2.12\n1:M 26 May 13:43:03.034 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 May 13:43:03.034 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
May 26 13:43:07.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-9v7ln'
May 26 13:43:08.331: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 26 13:43:08.331: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
May 26 13:43:08.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-9v7ln'
May 26 13:43:08.572: INFO: stderr: "No resources found.\n"
May 26 13:43:08.572: INFO: stdout: ""
May 26 13:43:08.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-9v7ln -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 26 13:43:08.842: INFO: stderr: ""
May 26 13:43:08.842: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:43:08.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9v7ln" for this suite.
May 26 13:43:30.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:43:30.927: INFO: namespace: e2e-tests-kubectl-9v7ln, resource: bindings, ignored listing per whitelist
May 26 13:43:30.943: INFO: namespace e2e-tests-kubectl-9v7ln deletion completed in 22.098011472s
• [SLOW TEST:47.257 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:43:30.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
May 26 13:43:31.036: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

May 26 13:43:31.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rnh2x'
May 26 13:43:31.429: INFO: stderr: ""
May 26 13:43:31.429: INFO: stdout: "service/redis-slave created\n"
May 26 13:43:31.429: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

May 26 13:43:31.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rnh2x'
May 26 13:43:31.767: INFO: stderr: ""
May 26 13:43:31.767: INFO: stdout: "service/redis-master created\n"
May 26 13:43:31.767: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 26 13:43:31.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rnh2x'
May 26 13:43:32.299: INFO: stderr: ""
May 26 13:43:32.299: INFO: stdout: "service/frontend created\n"
May 26 13:43:32.299: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

May 26 13:43:32.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rnh2x'
May 26 13:43:35.145: INFO: stderr: ""
May 26 13:43:35.145: INFO: stdout: "deployment.extensions/frontend created\n"
May 26 13:43:35.145: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 26 13:43:35.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rnh2x'
May 26 13:43:35.854: INFO: stderr: ""
May 26 13:43:35.854: INFO: stdout: "deployment.extensions/redis-master created\n"
May 26 13:43:35.854: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

May 26 13:43:35.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rnh2x'
May 26 13:43:37.305: INFO: stderr: ""
May 26 13:43:37.305: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
May 26 13:43:37.305: INFO: Waiting for all frontend pods to be Running.
May 26 13:44:07.356: INFO: Waiting for frontend to serve content.
May 26 13:44:07.427: INFO: Trying to add a new entry to the guestbook.
May 26 13:44:07.483: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 26 13:44:07.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rnh2x'
May 26 13:44:07.817: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 26 13:44:07.817: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
May 26 13:44:07.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rnh2x'
May 26 13:44:07.973: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 26 13:44:07.973: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
May 26 13:44:07.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rnh2x'
May 26 13:44:08.100: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 26 13:44:08.100: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 26 13:44:08.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rnh2x'
May 26 13:44:08.201: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 26 13:44:08.201: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 26 13:44:08.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rnh2x'
May 26 13:44:08.351: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 26 13:44:08.351: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
May 26 13:44:08.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rnh2x'
May 26 13:44:08.478: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 26 13:44:08.478: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:44:08.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rnh2x" for this suite.
May 26 13:44:54.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:44:54.520: INFO: namespace: e2e-tests-kubectl-rnh2x, resource: bindings, ignored listing per whitelist
May 26 13:44:54.565: INFO: namespace e2e-tests-kubectl-rnh2x deletion completed in 46.08379958s
• [SLOW TEST:83.622 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Guestbook application
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:44:54.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:45:11.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-4wqlh" for this suite.
May 26 13:45:17.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:45:17.841: INFO: namespace: e2e-tests-emptydir-wrapper-4wqlh, resource: bindings, ignored listing per whitelist
May 26 13:45:17.866: INFO: namespace e2e-tests-emptydir-wrapper-4wqlh deletion completed in 6.837258433s
• [SLOW TEST:23.300 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:45:17.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 26 13:45:18.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-bw6v4'
May 26 13:45:18.218: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 26 13:45:18.218: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
May 26 13:45:22.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-bw6v4'
May 26 13:45:22.408: INFO: stderr: ""
May 26 13:45:22.408: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:45:22.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bw6v4" for this suite.
May 26 13:45:36.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:45:36.940: INFO: namespace: e2e-tests-kubectl-bw6v4, resource: bindings, ignored listing per whitelist
May 26 13:45:36.954: INFO: namespace e2e-tests-kubectl-bw6v4 deletion completed in 14.541695241s
• [SLOW TEST:19.089 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:45:36.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:45:43.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-mtdd5" for this suite. May 26 13:45:49.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:45:49.532: INFO: namespace: e2e-tests-namespaces-mtdd5, resource: bindings, ignored listing per whitelist May 26 13:45:49.644: INFO: namespace e2e-tests-namespaces-mtdd5 deletion completed in 6.165212873s STEP: Destroying namespace "e2e-tests-nsdeletetest-sbgzg" for this suite. May 26 13:45:49.668: INFO: Namespace e2e-tests-nsdeletetest-sbgzg was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-5cg5x" for this suite. May 26 13:45:55.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:45:55.844: INFO: namespace: e2e-tests-nsdeletetest-5cg5x, resource: bindings, ignored listing per whitelist May 26 13:45:55.865: INFO: namespace e2e-tests-nsdeletetest-5cg5x deletion completed in 6.197120902s • [SLOW TEST:18.910 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:45:55.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-c4ssl [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-c4ssl STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-c4ssl May 26 13:45:59.642: INFO: Found 0 stateful pods, waiting for 1 May 26 13:46:09.793: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false May 26 13:46:19.646: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 26 13:46:19.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c4ssl ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 26 13:46:20.477: INFO: stderr: "I0526 13:46:19.767268 3950 log.go:172] (0xc00013a630) (0xc00088a3c0) Create stream\nI0526 13:46:19.767325 3950 
log.go:172] (0xc00013a630) (0xc00088a3c0) Stream added, broadcasting: 1\nI0526 13:46:19.772163 3950 log.go:172] (0xc00013a630) Reply frame received for 1\nI0526 13:46:19.772202 3950 log.go:172] (0xc00013a630) (0xc0007ceb40) Create stream\nI0526 13:46:19.772213 3950 log.go:172] (0xc00013a630) (0xc0007ceb40) Stream added, broadcasting: 3\nI0526 13:46:19.772834 3950 log.go:172] (0xc00013a630) Reply frame received for 3\nI0526 13:46:19.772854 3950 log.go:172] (0xc00013a630) (0xc0007cebe0) Create stream\nI0526 13:46:19.772860 3950 log.go:172] (0xc00013a630) (0xc0007cebe0) Stream added, broadcasting: 5\nI0526 13:46:19.773536 3950 log.go:172] (0xc00013a630) Reply frame received for 5\nI0526 13:46:20.470276 3950 log.go:172] (0xc00013a630) Data frame received for 5\nI0526 13:46:20.470300 3950 log.go:172] (0xc0007cebe0) (5) Data frame handling\nI0526 13:46:20.470334 3950 log.go:172] (0xc00013a630) Data frame received for 3\nI0526 13:46:20.470401 3950 log.go:172] (0xc0007ceb40) (3) Data frame handling\nI0526 13:46:20.470442 3950 log.go:172] (0xc0007ceb40) (3) Data frame sent\nI0526 13:46:20.470580 3950 log.go:172] (0xc00013a630) Data frame received for 3\nI0526 13:46:20.470622 3950 log.go:172] (0xc0007ceb40) (3) Data frame handling\nI0526 13:46:20.472126 3950 log.go:172] (0xc00013a630) Data frame received for 1\nI0526 13:46:20.472144 3950 log.go:172] (0xc00088a3c0) (1) Data frame handling\nI0526 13:46:20.472160 3950 log.go:172] (0xc00088a3c0) (1) Data frame sent\nI0526 13:46:20.472175 3950 log.go:172] (0xc00013a630) (0xc00088a3c0) Stream removed, broadcasting: 1\nI0526 13:46:20.472202 3950 log.go:172] (0xc00013a630) Go away received\nI0526 13:46:20.472354 3950 log.go:172] (0xc00013a630) (0xc00088a3c0) Stream removed, broadcasting: 1\nI0526 13:46:20.472380 3950 log.go:172] (0xc00013a630) (0xc0007ceb40) Stream removed, broadcasting: 3\nI0526 13:46:20.472395 3950 log.go:172] (0xc00013a630) (0xc0007cebe0) Stream removed, broadcasting: 5\n" May 26 13:46:20.478: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 26 13:46:20.478: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 26 13:46:20.481: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 26 13:46:30.484: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 26 13:46:30.484: INFO: Waiting for statefulset status.replicas updated to 0 May 26 13:46:30.646: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999584s May 26 13:46:31.670: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.844852728s May 26 13:46:32.736: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.821327613s May 26 13:46:33.739: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.755142721s May 26 13:46:34.918: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.752156821s May 26 13:46:36.097: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.572975548s May 26 13:46:37.101: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.394203208s May 26 13:46:38.130: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.389770429s May 26 13:46:39.134: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.360504004s May 26 13:46:40.161: INFO: Verifying statefulset ss doesn't scale past 1 for another 356.989224ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-c4ssl May 26 13:46:41.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c4ssl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 13:46:41.704: INFO: stderr: "I0526 13:46:41.282374 3972 log.go:172] (0xc0008382c0) (0xc000718640) Create stream\nI0526 13:46:41.282414 3972 log.go:172] 
(0xc0008382c0) (0xc000718640) Stream added, broadcasting: 1\nI0526 13:46:41.284341 3972 log.go:172] (0xc0008382c0) Reply frame received for 1\nI0526 13:46:41.284378 3972 log.go:172] (0xc0008382c0) (0xc00063ac80) Create stream\nI0526 13:46:41.284389 3972 log.go:172] (0xc0008382c0) (0xc00063ac80) Stream added, broadcasting: 3\nI0526 13:46:41.285260 3972 log.go:172] (0xc0008382c0) Reply frame received for 3\nI0526 13:46:41.285298 3972 log.go:172] (0xc0008382c0) (0xc0007186e0) Create stream\nI0526 13:46:41.285305 3972 log.go:172] (0xc0008382c0) (0xc0007186e0) Stream added, broadcasting: 5\nI0526 13:46:41.285965 3972 log.go:172] (0xc0008382c0) Reply frame received for 5\nI0526 13:46:41.697787 3972 log.go:172] (0xc0008382c0) Data frame received for 5\nI0526 13:46:41.697817 3972 log.go:172] (0xc0007186e0) (5) Data frame handling\nI0526 13:46:41.697838 3972 log.go:172] (0xc0008382c0) Data frame received for 3\nI0526 13:46:41.697846 3972 log.go:172] (0xc00063ac80) (3) Data frame handling\nI0526 13:46:41.697856 3972 log.go:172] (0xc00063ac80) (3) Data frame sent\nI0526 13:46:41.697865 3972 log.go:172] (0xc0008382c0) Data frame received for 3\nI0526 13:46:41.697872 3972 log.go:172] (0xc00063ac80) (3) Data frame handling\nI0526 13:46:41.698966 3972 log.go:172] (0xc0008382c0) Data frame received for 1\nI0526 13:46:41.698981 3972 log.go:172] (0xc000718640) (1) Data frame handling\nI0526 13:46:41.698992 3972 log.go:172] (0xc000718640) (1) Data frame sent\nI0526 13:46:41.699003 3972 log.go:172] (0xc0008382c0) (0xc000718640) Stream removed, broadcasting: 1\nI0526 13:46:41.699018 3972 log.go:172] (0xc0008382c0) Go away received\nI0526 13:46:41.699289 3972 log.go:172] (0xc0008382c0) (0xc000718640) Stream removed, broadcasting: 1\nI0526 13:46:41.699326 3972 log.go:172] (0xc0008382c0) (0xc00063ac80) Stream removed, broadcasting: 3\nI0526 13:46:41.699352 3972 log.go:172] (0xc0008382c0) (0xc0007186e0) Stream removed, broadcasting: 5\n" May 26 13:46:41.704: INFO: stdout: 
"'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 26 13:46:41.704: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 26 13:46:41.707: INFO: Found 1 stateful pods, waiting for 3 May 26 13:46:51.711: INFO: Found 2 stateful pods, waiting for 3 May 26 13:47:01.910: INFO: Found 2 stateful pods, waiting for 3 May 26 13:47:11.711: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 26 13:47:11.711: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 26 13:47:11.711: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false May 26 13:47:21.712: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 26 13:47:21.712: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 26 13:47:21.712: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 26 13:47:21.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c4ssl ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 26 13:47:22.024: INFO: stderr: "I0526 13:47:21.831296 3994 log.go:172] (0xc00014c840) (0xc000728640) Create stream\nI0526 13:47:21.831344 3994 log.go:172] (0xc00014c840) (0xc000728640) Stream added, broadcasting: 1\nI0526 13:47:21.833001 3994 log.go:172] (0xc00014c840) Reply frame received for 1\nI0526 13:47:21.833040 3994 log.go:172] (0xc00014c840) (0xc0005dcdc0) Create stream\nI0526 13:47:21.833052 3994 log.go:172] (0xc00014c840) (0xc0005dcdc0) Stream added, broadcasting: 3\nI0526 13:47:21.833933 3994 log.go:172] (0xc00014c840) Reply frame received for 3\nI0526 13:47:21.833986 3994 log.go:172] 
(0xc00014c840) (0xc0007286e0) Create stream\nI0526 13:47:21.834011 3994 log.go:172] (0xc00014c840) (0xc0007286e0) Stream added, broadcasting: 5\nI0526 13:47:21.834846 3994 log.go:172] (0xc00014c840) Reply frame received for 5\nI0526 13:47:22.019631 3994 log.go:172] (0xc00014c840) Data frame received for 3\nI0526 13:47:22.019660 3994 log.go:172] (0xc0005dcdc0) (3) Data frame handling\nI0526 13:47:22.019673 3994 log.go:172] (0xc00014c840) Data frame received for 5\nI0526 13:47:22.019691 3994 log.go:172] (0xc0007286e0) (5) Data frame handling\nI0526 13:47:22.019719 3994 log.go:172] (0xc0005dcdc0) (3) Data frame sent\nI0526 13:47:22.019732 3994 log.go:172] (0xc00014c840) Data frame received for 3\nI0526 13:47:22.019739 3994 log.go:172] (0xc0005dcdc0) (3) Data frame handling\nI0526 13:47:22.020662 3994 log.go:172] (0xc00014c840) Data frame received for 1\nI0526 13:47:22.020729 3994 log.go:172] (0xc000728640) (1) Data frame handling\nI0526 13:47:22.020761 3994 log.go:172] (0xc000728640) (1) Data frame sent\nI0526 13:47:22.020810 3994 log.go:172] (0xc00014c840) (0xc000728640) Stream removed, broadcasting: 1\nI0526 13:47:22.020850 3994 log.go:172] (0xc00014c840) Go away received\nI0526 13:47:22.021040 3994 log.go:172] (0xc00014c840) (0xc000728640) Stream removed, broadcasting: 1\nI0526 13:47:22.021062 3994 log.go:172] (0xc00014c840) (0xc0005dcdc0) Stream removed, broadcasting: 3\nI0526 13:47:22.021084 3994 log.go:172] (0xc00014c840) (0xc0007286e0) Stream removed, broadcasting: 5\n" May 26 13:47:22.024: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 26 13:47:22.024: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 26 13:47:22.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c4ssl ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 26 13:47:22.504: INFO: stderr: "I0526 
13:47:22.173045 4016 log.go:172] (0xc000138840) (0xc00075a640) Create stream\nI0526 13:47:22.173415 4016 log.go:172] (0xc000138840) (0xc00075a640) Stream added, broadcasting: 1\nI0526 13:47:22.176393 4016 log.go:172] (0xc000138840) Reply frame received for 1\nI0526 13:47:22.176423 4016 log.go:172] (0xc000138840) (0xc0001f6d20) Create stream\nI0526 13:47:22.176433 4016 log.go:172] (0xc000138840) (0xc0001f6d20) Stream added, broadcasting: 3\nI0526 13:47:22.177024 4016 log.go:172] (0xc000138840) Reply frame received for 3\nI0526 13:47:22.177048 4016 log.go:172] (0xc000138840) (0xc00075a6e0) Create stream\nI0526 13:47:22.177055 4016 log.go:172] (0xc000138840) (0xc00075a6e0) Stream added, broadcasting: 5\nI0526 13:47:22.177856 4016 log.go:172] (0xc000138840) Reply frame received for 5\nI0526 13:47:22.500493 4016 log.go:172] (0xc000138840) Data frame received for 3\nI0526 13:47:22.500588 4016 log.go:172] (0xc0001f6d20) (3) Data frame handling\nI0526 13:47:22.500611 4016 log.go:172] (0xc0001f6d20) (3) Data frame sent\nI0526 13:47:22.500798 4016 log.go:172] (0xc000138840) Data frame received for 5\nI0526 13:47:22.500818 4016 log.go:172] (0xc00075a6e0) (5) Data frame handling\nI0526 13:47:22.500835 4016 log.go:172] (0xc000138840) Data frame received for 3\nI0526 13:47:22.500842 4016 log.go:172] (0xc0001f6d20) (3) Data frame handling\nI0526 13:47:22.502354 4016 log.go:172] (0xc000138840) Data frame received for 1\nI0526 13:47:22.502364 4016 log.go:172] (0xc00075a640) (1) Data frame handling\nI0526 13:47:22.502380 4016 log.go:172] (0xc00075a640) (1) Data frame sent\nI0526 13:47:22.502388 4016 log.go:172] (0xc000138840) (0xc00075a640) Stream removed, broadcasting: 1\nI0526 13:47:22.502471 4016 log.go:172] (0xc000138840) Go away received\nI0526 13:47:22.502524 4016 log.go:172] (0xc000138840) (0xc00075a640) Stream removed, broadcasting: 1\nI0526 13:47:22.502545 4016 log.go:172] (0xc000138840) (0xc0001f6d20) Stream removed, broadcasting: 3\nI0526 13:47:22.502560 4016 log.go:172] 
(0xc000138840) (0xc00075a6e0) Stream removed, broadcasting: 5\n" May 26 13:47:22.505: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 26 13:47:22.505: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 26 13:47:22.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c4ssl ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 26 13:47:23.150: INFO: stderr: "I0526 13:47:22.640050 4038 log.go:172] (0xc00015c6e0) (0xc00065d5e0) Create stream\nI0526 13:47:22.640096 4038 log.go:172] (0xc00015c6e0) (0xc00065d5e0) Stream added, broadcasting: 1\nI0526 13:47:22.642294 4038 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0526 13:47:22.642337 4038 log.go:172] (0xc00015c6e0) (0xc0003ce000) Create stream\nI0526 13:47:22.642351 4038 log.go:172] (0xc00015c6e0) (0xc0003ce000) Stream added, broadcasting: 3\nI0526 13:47:22.643172 4038 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0526 13:47:22.643216 4038 log.go:172] (0xc00015c6e0) (0xc000756000) Create stream\nI0526 13:47:22.643230 4038 log.go:172] (0xc00015c6e0) (0xc000756000) Stream added, broadcasting: 5\nI0526 13:47:22.643992 4038 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0526 13:47:23.145492 4038 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0526 13:47:23.145522 4038 log.go:172] (0xc0003ce000) (3) Data frame handling\nI0526 13:47:23.145531 4038 log.go:172] (0xc0003ce000) (3) Data frame sent\nI0526 13:47:23.145537 4038 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0526 13:47:23.145542 4038 log.go:172] (0xc0003ce000) (3) Data frame handling\nI0526 13:47:23.145564 4038 log.go:172] (0xc00015c6e0) Data frame received for 5\nI0526 13:47:23.145573 4038 log.go:172] (0xc000756000) (5) Data frame handling\nI0526 13:47:23.146414 4038 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0526 
13:47:23.146424 4038 log.go:172] (0xc00065d5e0) (1) Data frame handling\nI0526 13:47:23.146430 4038 log.go:172] (0xc00065d5e0) (1) Data frame sent\nI0526 13:47:23.146436 4038 log.go:172] (0xc00015c6e0) (0xc00065d5e0) Stream removed, broadcasting: 1\nI0526 13:47:23.146445 4038 log.go:172] (0xc00015c6e0) Go away received\nI0526 13:47:23.146619 4038 log.go:172] (0xc00015c6e0) (0xc00065d5e0) Stream removed, broadcasting: 1\nI0526 13:47:23.146639 4038 log.go:172] (0xc00015c6e0) (0xc0003ce000) Stream removed, broadcasting: 3\nI0526 13:47:23.146649 4038 log.go:172] (0xc00015c6e0) (0xc000756000) Stream removed, broadcasting: 5\n" May 26 13:47:23.150: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 26 13:47:23.150: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 26 13:47:23.150: INFO: Waiting for statefulset status.replicas updated to 0 May 26 13:47:23.152: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 26 13:47:33.159: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 26 13:47:33.159: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 26 13:47:33.159: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 26 13:47:33.168: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999239s May 26 13:47:34.172: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996102746s May 26 13:47:35.177: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991722598s May 26 13:47:36.181: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986771037s May 26 13:47:37.186: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.98295303s May 26 13:47:38.190: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.978295226s May 26 
13:47:40.193: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.974086739s May 26 13:47:41.863: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.970896066s May 26 13:47:43.192: INFO: Verifying statefulset ss doesn't scale past 3 for another 301.009913ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-c4ssl May 26 13:47:44.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c4ssl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 13:47:44.912: INFO: stderr: "I0526 13:47:44.360554 4060 log.go:172] (0xc00013a580) (0xc000709360) Create stream\nI0526 13:47:44.360592 4060 log.go:172] (0xc00013a580) (0xc000709360) Stream added, broadcasting: 1\nI0526 13:47:44.362765 4060 log.go:172] (0xc00013a580) Reply frame received for 1\nI0526 13:47:44.362797 4060 log.go:172] (0xc00013a580) (0xc0004d0000) Create stream\nI0526 13:47:44.362806 4060 log.go:172] (0xc00013a580) (0xc0004d0000) Stream added, broadcasting: 3\nI0526 13:47:44.363366 4060 log.go:172] (0xc00013a580) Reply frame received for 3\nI0526 13:47:44.363392 4060 log.go:172] (0xc00013a580) (0xc000709400) Create stream\nI0526 13:47:44.363399 4060 log.go:172] (0xc00013a580) (0xc000709400) Stream added, broadcasting: 5\nI0526 13:47:44.363971 4060 log.go:172] (0xc00013a580) Reply frame received for 5\nI0526 13:47:44.907567 4060 log.go:172] (0xc00013a580) Data frame received for 5\nI0526 13:47:44.907594 4060 log.go:172] (0xc000709400) (5) Data frame handling\nI0526 13:47:44.907624 4060 log.go:172] (0xc00013a580) Data frame received for 3\nI0526 13:47:44.907633 4060 log.go:172] (0xc0004d0000) (3) Data frame handling\nI0526 13:47:44.907639 4060 log.go:172] (0xc0004d0000) (3) Data frame sent\nI0526 13:47:44.907670 4060 log.go:172] (0xc00013a580) Data frame received for 3\nI0526 13:47:44.907689 4060 log.go:172] (0xc0004d0000) (3) Data 
frame handling\nI0526 13:47:44.908783 4060 log.go:172] (0xc00013a580) Data frame received for 1\nI0526 13:47:44.908796 4060 log.go:172] (0xc000709360) (1) Data frame handling\nI0526 13:47:44.908808 4060 log.go:172] (0xc000709360) (1) Data frame sent\nI0526 13:47:44.908814 4060 log.go:172] (0xc00013a580) (0xc000709360) Stream removed, broadcasting: 1\nI0526 13:47:44.908912 4060 log.go:172] (0xc00013a580) (0xc000709360) Stream removed, broadcasting: 1\nI0526 13:47:44.908929 4060 log.go:172] (0xc00013a580) (0xc0004d0000) Stream removed, broadcasting: 3\nI0526 13:47:44.908935 4060 log.go:172] (0xc00013a580) (0xc000709400) Stream removed, broadcasting: 5\nI0526 13:47:44.908962 4060 log.go:172] (0xc00013a580) Go away received\n" May 26 13:47:44.912: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 26 13:47:44.912: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 26 13:47:44.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c4ssl ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 13:47:45.454: INFO: stderr: "I0526 13:47:45.027967 4082 log.go:172] (0xc000770160) (0xc0006d6640) Create stream\nI0526 13:47:45.028013 4082 log.go:172] (0xc000770160) (0xc0006d6640) Stream added, broadcasting: 1\nI0526 13:47:45.030438 4082 log.go:172] (0xc000770160) Reply frame received for 1\nI0526 13:47:45.030478 4082 log.go:172] (0xc000770160) (0xc0007a0e60) Create stream\nI0526 13:47:45.030501 4082 log.go:172] (0xc000770160) (0xc0007a0e60) Stream added, broadcasting: 3\nI0526 13:47:45.031329 4082 log.go:172] (0xc000770160) Reply frame received for 3\nI0526 13:47:45.031351 4082 log.go:172] (0xc000770160) (0xc0006d66e0) Create stream\nI0526 13:47:45.031360 4082 log.go:172] (0xc000770160) (0xc0006d66e0) Stream added, broadcasting: 5\nI0526 13:47:45.032268 4082 log.go:172] (0xc000770160) 
Reply frame received for 5\nI0526 13:47:45.448034 4082 log.go:172] (0xc000770160) Data frame received for 5\nI0526 13:47:45.448061 4082 log.go:172] (0xc0006d66e0) (5) Data frame handling\nI0526 13:47:45.448080 4082 log.go:172] (0xc000770160) Data frame received for 3\nI0526 13:47:45.448089 4082 log.go:172] (0xc0007a0e60) (3) Data frame handling\nI0526 13:47:45.448097 4082 log.go:172] (0xc0007a0e60) (3) Data frame sent\nI0526 13:47:45.448106 4082 log.go:172] (0xc000770160) Data frame received for 3\nI0526 13:47:45.448113 4082 log.go:172] (0xc0007a0e60) (3) Data frame handling\nI0526 13:47:45.450616 4082 log.go:172] (0xc000770160) Data frame received for 1\nI0526 13:47:45.450633 4082 log.go:172] (0xc0006d6640) (1) Data frame handling\nI0526 13:47:45.450652 4082 log.go:172] (0xc0006d6640) (1) Data frame sent\nI0526 13:47:45.450696 4082 log.go:172] (0xc000770160) (0xc0006d6640) Stream removed, broadcasting: 1\nI0526 13:47:45.450799 4082 log.go:172] (0xc000770160) Go away received\nI0526 13:47:45.450928 4082 log.go:172] (0xc000770160) (0xc0006d6640) Stream removed, broadcasting: 1\nI0526 13:47:45.450950 4082 log.go:172] (0xc000770160) (0xc0007a0e60) Stream removed, broadcasting: 3\nI0526 13:47:45.450968 4082 log.go:172] (0xc000770160) (0xc0006d66e0) Stream removed, broadcasting: 5\n" May 26 13:47:45.454: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 26 13:47:45.454: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 26 13:47:45.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c4ssl ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 26 13:47:46.952: INFO: stderr: "I0526 13:47:46.637309 4104 log.go:172] (0xc0007e42c0) (0xc000746640) Create stream\nI0526 13:47:46.637366 4104 log.go:172] (0xc0007e42c0) (0xc000746640) Stream added, broadcasting: 1\nI0526 
13:47:46.639032 4104 log.go:172] (0xc0007e42c0) Reply frame received for 1\nI0526 13:47:46.639061 4104 log.go:172] (0xc0007e42c0) (0xc0003d9040) Create stream\nI0526 13:47:46.639068 4104 log.go:172] (0xc0007e42c0) (0xc0003d9040) Stream added, broadcasting: 3\nI0526 13:47:46.639551 4104 log.go:172] (0xc0007e42c0) Reply frame received for 3\nI0526 13:47:46.639572 4104 log.go:172] (0xc0007e42c0) (0xc0000ea000) Create stream\nI0526 13:47:46.639579 4104 log.go:172] (0xc0007e42c0) (0xc0000ea000) Stream added, broadcasting: 5\nI0526 13:47:46.640075 4104 log.go:172] (0xc0007e42c0) Reply frame received for 5\nI0526 13:47:46.946552 4104 log.go:172] (0xc0007e42c0) Data frame received for 5\nI0526 13:47:46.946616 4104 log.go:172] (0xc0007e42c0) Data frame received for 3\nI0526 13:47:46.946648 4104 log.go:172] (0xc0003d9040) (3) Data frame handling\nI0526 13:47:46.946783 4104 log.go:172] (0xc0003d9040) (3) Data frame sent\nI0526 13:47:46.946798 4104 log.go:172] (0xc0007e42c0) Data frame received for 3\nI0526 13:47:46.946866 4104 log.go:172] (0xc0003d9040) (3) Data frame handling\nI0526 13:47:46.946881 4104 log.go:172] (0xc0000ea000) (5) Data frame handling\nI0526 13:47:46.947745 4104 log.go:172] (0xc0007e42c0) Data frame received for 1\nI0526 13:47:46.947755 4104 log.go:172] (0xc000746640) (1) Data frame handling\nI0526 13:47:46.947763 4104 log.go:172] (0xc000746640) (1) Data frame sent\nI0526 13:47:46.947770 4104 log.go:172] (0xc0007e42c0) (0xc000746640) Stream removed, broadcasting: 1\nI0526 13:47:46.947903 4104 log.go:172] (0xc0007e42c0) (0xc000746640) Stream removed, broadcasting: 1\nI0526 13:47:46.947963 4104 log.go:172] (0xc0007e42c0) (0xc0003d9040) Stream removed, broadcasting: 3\nI0526 13:47:46.947976 4104 log.go:172] (0xc0007e42c0) (0xc0000ea000) Stream removed, broadcasting: 5\n" May 26 13:47:46.952: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 26 13:47:46.952: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on 
ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 26 13:47:46.952: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 26 13:48:26.963: INFO: Deleting all statefulset in ns e2e-tests-statefulset-c4ssl May 26 13:48:26.965: INFO: Scaling statefulset ss to 0 May 26 13:48:26.972: INFO: Waiting for statefulset status.replicas updated to 0 May 26 13:48:26.974: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:48:27.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-c4ssl" for this suite. May 26 13:48:37.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:48:37.167: INFO: namespace: e2e-tests-statefulset-c4ssl, resource: bindings, ignored listing per whitelist May 26 13:48:37.186: INFO: namespace e2e-tests-statefulset-c4ssl deletion completed in 10.115542874s • [SLOW TEST:161.321 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:48:37.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-9ab24fd0-9f57-11ea-b1d1-0242ac110018 STEP: Creating secret with name s-test-opt-upd-9ab25037-9f57-11ea-b1d1-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-9ab24fd0-9f57-11ea-b1d1-0242ac110018 STEP: Updating secret s-test-opt-upd-9ab25037-9f57-11ea-b1d1-0242ac110018 STEP: Creating secret with name s-test-opt-create-9ab2505e-9f57-11ea-b1d1-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:50:35.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ktnt5" for this suite. 
May 26 13:51:13.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:51:13.151: INFO: namespace: e2e-tests-secrets-ktnt5, resource: bindings, ignored listing per whitelist May 26 13:51:13.165: INFO: namespace e2e-tests-secrets-ktnt5 deletion completed in 38.103720995s • [SLOW TEST:155.978 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:51:13.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-pfg6 STEP: Creating a pod to test atomic-volume-subpath May 26 13:51:13.335: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pfg6" in namespace "e2e-tests-subpath-852pt" to be "success or failure" May 26 13:51:13.340: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.846605ms May 26 13:51:15.344: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008788108s May 26 13:51:17.347: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011427242s May 26 13:51:19.350: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015058141s May 26 13:51:21.459: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123867218s May 26 13:51:23.463: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.127922451s May 26 13:51:25.467: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.131391806s May 26 13:51:27.470: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.134706042s May 26 13:51:29.473: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.137694195s May 26 13:51:31.476: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.140790634s May 26 13:51:33.641: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.305783459s May 26 13:51:35.644: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.309149176s May 26 13:51:38.046: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.710777573s May 26 13:51:40.049: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 26.714071044s May 26 13:51:42.052: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.717072455s May 26 13:51:44.055: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.71991411s May 26 13:51:46.059: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Running", Reason="", readiness=true. Elapsed: 32.724047728s May 26 13:51:48.605: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Running", Reason="", readiness=false. Elapsed: 35.270087071s May 26 13:51:50.682: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Running", Reason="", readiness=false. Elapsed: 37.346550385s May 26 13:51:52.685: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Running", Reason="", readiness=false. Elapsed: 39.349919657s May 26 13:51:55.546: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Running", Reason="", readiness=false. Elapsed: 42.210575199s May 26 13:51:57.955: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Running", Reason="", readiness=false. Elapsed: 44.620125315s May 26 13:52:01.614: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Running", Reason="", readiness=false. Elapsed: 48.279228115s May 26 13:52:03.797: INFO: Pod "pod-subpath-test-secret-pfg6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 50.462112308s STEP: Saw pod success May 26 13:52:03.797: INFO: Pod "pod-subpath-test-secret-pfg6" satisfied condition "success or failure" May 26 13:52:03.802: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-pfg6 container test-container-subpath-secret-pfg6: STEP: delete the pod May 26 13:52:05.180: INFO: Waiting for pod pod-subpath-test-secret-pfg6 to disappear May 26 13:52:05.501: INFO: Pod pod-subpath-test-secret-pfg6 no longer exists STEP: Deleting pod pod-subpath-test-secret-pfg6 May 26 13:52:05.501: INFO: Deleting pod "pod-subpath-test-secret-pfg6" in namespace "e2e-tests-subpath-852pt" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:52:05.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-852pt" for this suite. 
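The subPath mount exercised above can be sketched like this (hedged; all names are hypothetical). Instead of mounting the whole secret volume, the container mounts a single key at an exact file path:

```yaml
# Sketch of a secret subPath mount (names/image are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-secret-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                   # assumption
    command: ["cat", "/test-volume/test-file"]
    volumeMounts:
    - name: secret-vol
      mountPath: /test-volume/test-file
      subPath: data-1                # mount only this key of the secret
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret          # hypothetical
```

Secret, configMap, and downwardAPI volumes are "atomic writer" volumes (updates land via a timestamped directory swapped in atomically); a subPath bind-mounts the resolved file directly, which is why subPath behavior on these volumes gets its own conformance coverage.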
May 26 13:52:13.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:52:13.554: INFO: namespace: e2e-tests-subpath-852pt, resource: bindings, ignored listing per whitelist May 26 13:52:13.578: INFO: namespace e2e-tests-subpath-852pt deletion completed in 8.072010762s • [SLOW TEST:60.413 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:52:13.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0526 13:52:44.193649 6 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 13:52:44.193: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:52:44.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-pphqj" for this suite. 
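The orphaning behavior verified above corresponds to deleting the Deployment with `propagationPolicy: Orphan`, so the garbage collector deliberately leaves the child ReplicaSet in place. Against the API directly, the delete request body would look like this (a sketch, not taken from the log):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

With kubectl of this vintage (v1.13), the rough equivalent is `kubectl delete deployment <name> --cascade=false`; later releases spell it `--cascade=orphan`.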
May 26 13:52:50.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:52:50.288: INFO: namespace: e2e-tests-gc-pphqj, resource: bindings, ignored listing per whitelist May 26 13:52:50.290: INFO: namespace e2e-tests-gc-pphqj deletion completed in 6.09411218s • [SLOW TEST:36.712 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:52:50.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-30437a71-9f58-11ea-b1d1-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-30437aa8-9f58-11ea-b1d1-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-30437a71-9f58-11ea-b1d1-0242ac110018 STEP: Updating configmap cm-test-opt-upd-30437aa8-9f58-11ea-b1d1-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-30437abe-9f58-11ea-b1d1-0242ac110018 STEP: waiting to observe update in volume [AfterEach] 
[sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:53:22.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-f8lsc" for this suite. May 26 13:54:00.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:54:00.583: INFO: namespace: e2e-tests-configmap-f8lsc, resource: bindings, ignored listing per whitelist May 26 13:54:00.663: INFO: namespace e2e-tests-configmap-f8lsc deletion completed in 38.106024125s • [SLOW TEST:70.372 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:54:00.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 26 13:54:00.772: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-8gv4f" to be "success or failure" May 26 13:54:00.779: INFO: Pod "downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.753766ms May 26 13:54:02.783: INFO: Pod "downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010723869s May 26 13:54:04.786: INFO: Pod "downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014346471s May 26 13:54:06.789: INFO: Pod "downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017365714s May 26 13:54:08.793: INFO: Pod "downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020758767s May 26 13:54:10.796: INFO: Pod "downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024109114s May 26 13:54:12.799: INFO: Pod "downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027067419s May 26 13:54:14.802: INFO: Pod "downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.030504366s May 26 13:54:16.806: INFO: Pod "downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.034272898s STEP: Saw pod success May 26 13:54:16.806: INFO: Pod "downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 13:54:16.809: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018 container client-container: STEP: delete the pod May 26 13:54:16.829: INFO: Waiting for pod downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018 to disappear May 26 13:54:16.887: INFO: Pod downwardapi-volume-5a367a20-9f58-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:54:16.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8gv4f" for this suite. May 26 13:54:22.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:54:22.960: INFO: namespace: e2e-tests-projected-8gv4f, resource: bindings, ignored listing per whitelist May 26 13:54:22.966: INFO: namespace e2e-tests-projected-8gv4f deletion completed in 6.059760025s • [SLOW TEST:22.303 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 
13:54:22.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 26 13:54:23.084: INFO: Waiting up to 5m0s for pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018" in namespace "e2e-tests-downward-api-4v8xc" to be "success or failure" May 26 13:54:23.086: INFO: Pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153162ms May 26 13:54:25.089: INFO: Pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005505762s May 26 13:54:27.092: INFO: Pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00868042s May 26 13:54:29.327: INFO: Pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.242873104s May 26 13:54:31.336: INFO: Pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252129082s May 26 13:54:33.339: INFO: Pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.254943689s May 26 13:54:35.343: INFO: Pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.258961772s May 26 13:54:37.346: INFO: Pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.262647024s May 26 13:54:39.713: INFO: Pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.629414855s May 26 13:54:41.732: INFO: Pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 18.648429967s May 26 13:54:43.735: INFO: Pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.651767002s STEP: Saw pod success May 26 13:54:43.735: INFO: Pod "downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 13:54:43.739: INFO: Trying to get logs from node hunter-worker pod downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018 container dapi-container: STEP: delete the pod May 26 13:54:43.823: INFO: Waiting for pod downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018 to disappear May 26 13:54:43.834: INFO: Pod downward-api-677a17b8-9f58-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:54:43.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4v8xc" for this suite. 
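The test above relies on the downward API falling back to node allocatable when a container declares no resource limits. A minimal sketch (pod name, image, and variable names are hypothetical):

```yaml
# Sketch: limits.cpu/limits.memory via resourceFieldRef with no limits set.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # assumption; the suite uses its own test image
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    # No resources.limits declared, so these resolve to node allocatable values.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.memory
```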
May 26 13:54:49.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:54:49.928: INFO: namespace: e2e-tests-downward-api-4v8xc, resource: bindings, ignored listing per whitelist May 26 13:54:49.942: INFO: namespace e2e-tests-downward-api-4v8xc deletion completed in 6.105857608s • [SLOW TEST:26.976 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:54:49.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-779e7ac5-9f58-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume secrets May 26 13:54:50.154: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-gc9tb" to be "success or failure" May 26 13:54:50.235: INFO: Pod 
"pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 80.423952ms May 26 13:54:52.238: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08426014s May 26 13:54:54.242: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087863822s May 26 13:54:56.246: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091590529s May 26 13:54:58.250: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095511737s May 26 13:55:00.253: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.099372026s May 26 13:55:02.257: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.10310768s May 26 13:55:04.897: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.743286581s May 26 13:55:06.901: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.74671462s May 26 13:55:09.046: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.891849903s May 26 13:55:11.244: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.089565391s May 26 13:55:13.247: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.093256171s May 26 13:55:15.251: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.096596081s May 26 13:55:17.318: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 27.164405696s May 26 13:55:19.504: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.349745196s May 26 13:55:21.507: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 31.352680613s May 26 13:55:23.510: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.355854456s STEP: Saw pod success May 26 13:55:23.510: INFO: Pod "pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 13:55:23.512: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 26 13:55:23.572: INFO: Waiting for pod pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018 to disappear May 26 13:55:23.584: INFO: Pod pod-projected-secrets-77a25336-9f58-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:55:23.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gc9tb" for this suite. 
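The defaultMode/fsGroup combination this test checks can be sketched as follows (hedged; names and image are hypothetical). The projected volume's files should be group-owned per `fsGroup` and carry permissions from `defaultMode`, readable by the non-root user:

```yaml
# Sketch of a projected secret with defaultMode and fsGroup (names assumed).
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-example     # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root, as the test name requires
    fsGroup: 1001
  containers:
  - name: projected-secret-volume-test
    image: busybox                   # assumption
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      defaultMode: 0400              # applied to each projected file
      sources:
      - secret:
          name: my-projected-secret  # hypothetical; the log uses a generated name
```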
May 26 13:55:29.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:55:29.617: INFO: namespace: e2e-tests-projected-gc9tb, resource: bindings, ignored listing per whitelist May 26 13:55:29.652: INFO: namespace e2e-tests-projected-gc9tb deletion completed in 6.065347361s • [SLOW TEST:39.710 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:55:29.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 26 13:55:30.350: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-ftcxh" to be "success or 
failure" May 26 13:55:30.387: INFO: Pod "downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 36.262721ms May 26 13:55:32.390: INFO: Pod "downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03934939s May 26 13:55:34.393: INFO: Pod "downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042707876s May 26 13:55:36.398: INFO: Pod "downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047116995s May 26 13:55:38.400: INFO: Pod "downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049194064s May 26 13:55:40.403: INFO: Pod "downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.052134857s May 26 13:55:42.524: INFO: Pod "downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 12.17304898s May 26 13:55:44.527: INFO: Pod "downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.176165637s STEP: Saw pod success May 26 13:55:44.527: INFO: Pod "downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 13:55:44.529: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018 container client-container: STEP: delete the pod May 26 13:55:44.568: INFO: Waiting for pod downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018 to disappear May 26 13:55:44.621: INFO: Pod downwardapi-volume-8f519419-9f58-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:55:44.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ftcxh" for this suite. May 26 13:55:50.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:55:50.733: INFO: namespace: e2e-tests-projected-ftcxh, resource: bindings, ignored listing per whitelist May 26 13:55:50.752: INFO: namespace e2e-tests-projected-ftcxh deletion completed in 6.12782008s • [SLOW TEST:21.100 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 26 13:55:50.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-9bd0e94f-9f58-11ea-b1d1-0242ac110018 STEP: Creating a pod to test consume secrets May 26 13:55:50.939: INFO: Waiting up to 5m0s for pod "pod-secrets-9bde7749-9f58-11ea-b1d1-0242ac110018" in namespace "e2e-tests-secrets-9vckq" to be "success or failure" May 26 13:55:50.950: INFO: Pod "pod-secrets-9bde7749-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.304156ms May 26 13:55:52.954: INFO: Pod "pod-secrets-9bde7749-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014470437s May 26 13:55:54.957: INFO: Pod "pod-secrets-9bde7749-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017807925s May 26 13:55:56.960: INFO: Pod "pod-secrets-9bde7749-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020999828s May 26 13:55:58.964: INFO: Pod "pod-secrets-9bde7749-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024498873s May 26 13:56:00.966: INFO: Pod "pod-secrets-9bde7749-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027199363s May 26 13:56:02.969: INFO: Pod "pod-secrets-9bde7749-9f58-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.029927091s STEP: Saw pod success May 26 13:56:02.969: INFO: Pod "pod-secrets-9bde7749-9f58-11ea-b1d1-0242ac110018" satisfied condition "success or failure" May 26 13:56:02.971: INFO: Trying to get logs from node hunter-worker pod pod-secrets-9bde7749-9f58-11ea-b1d1-0242ac110018 container secret-volume-test: STEP: delete the pod May 26 13:56:02.987: INFO: Waiting for pod pod-secrets-9bde7749-9f58-11ea-b1d1-0242ac110018 to disappear May 26 13:56:03.032: INFO: Pod pod-secrets-9bde7749-9f58-11ea-b1d1-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 26 13:56:03.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-9vckq" for this suite. May 26 13:56:09.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 26 13:56:09.100: INFO: namespace: e2e-tests-secrets-9vckq, resource: bindings, ignored listing per whitelist May 26 13:56:09.122: INFO: namespace e2e-tests-secrets-9vckq deletion completed in 6.086545159s STEP: Destroying namespace "e2e-tests-secret-namespace-k89nd" for this suite. 
May 26 13:56:15.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:56:15.205: INFO: namespace: e2e-tests-secret-namespace-k89nd, resource: bindings, ignored listing per whitelist
May 26 13:56:15.217: INFO: namespace e2e-tests-secret-namespace-k89nd deletion completed in 6.095639035s
• [SLOW TEST:24.465 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:56:15.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nvg85
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-nvg85
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-nvg85
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-nvg85
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-nvg85
May 26 13:56:31.440: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nvg85, name: ss-0, uid: ae0d35c4-9f58-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 26 13:56:31.444: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nvg85, name: ss-0, uid: ae0d35c4-9f58-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 26 13:56:31.577: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-nvg85
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-nvg85
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-nvg85 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 26 13:56:51.799: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nvg85
May 26 13:56:51.802: INFO: Scaling statefulset ss to 0
May 26 13:57:01.830: INFO: Waiting for statefulset status.replicas updated to 0
May 26 13:57:01.833: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:57:01.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nvg85" for this suite.
May 26 13:57:07.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:57:07.875: INFO: namespace: e2e-tests-statefulset-nvg85, resource: bindings, ignored listing per whitelist
May 26 13:57:07.956: INFO: namespace e2e-tests-statefulset-nvg85 deletion completed in 6.105069112s
• [SLOW TEST:52.738 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:57:07.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-c9d2fcae-9f58-11ea-b1d1-0242ac110018
STEP: Creating a pod to test consume configMaps
May 26 13:57:08.050: INFO: Waiting up to 5m0s for pod "pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018" in namespace "e2e-tests-configmap-9q4j6" to be "success or failure"
May 26 13:57:08.066: INFO: Pod "pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.888215ms
May 26 13:57:10.070: INFO: Pod "pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019521541s
May 26 13:57:12.073: INFO: Pod "pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023014766s
May 26 13:57:14.076: INFO: Pod "pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02588387s
May 26 13:57:16.079: INFO: Pod "pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029160037s
May 26 13:57:18.082: INFO: Pod "pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.032188006s
May 26 13:57:20.086: INFO: Pod "pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 12.035586394s
May 26 13:57:22.089: INFO: Pod "pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.03894695s
STEP: Saw pod success
May 26 13:57:22.089: INFO: Pod "pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:57:22.092: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018 container configmap-volume-test: 
STEP: delete the pod
May 26 13:57:22.131: INFO: Waiting for pod pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018 to disappear
May 26 13:57:22.138: INFO: Pod pod-configmaps-c9d54bb9-9f58-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:57:22.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9q4j6" for this suite.
May 26 13:57:28.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:57:28.174: INFO: namespace: e2e-tests-configmap-9q4j6, resource: bindings, ignored listing per whitelist
May 26 13:57:28.238: INFO: namespace e2e-tests-configmap-9q4j6 deletion completed in 6.096771081s
• [SLOW TEST:20.282 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 26 13:57:28.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-d5f05f78-9f58-11ea-b1d1-0242ac110018
STEP: Creating secret with name secret-projected-all-test-volume-d5f05f38-9f58-11ea-b1d1-0242ac110018
STEP: Creating a pod to test Check all projections for projected volume plugin
May 26 13:57:28.361: INFO: Waiting up to 5m0s for pod "projected-volume-d5f05eb6-9f58-11ea-b1d1-0242ac110018" in namespace "e2e-tests-projected-zmrvh" to be "success or failure"
May 26 13:57:28.377: INFO: Pod "projected-volume-d5f05eb6-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.493133ms
May 26 13:57:30.381: INFO: Pod "projected-volume-d5f05eb6-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019974232s
May 26 13:57:32.384: INFO: Pod "projected-volume-d5f05eb6-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023709744s
May 26 13:57:34.388: INFO: Pod "projected-volume-d5f05eb6-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027081249s
May 26 13:57:36.391: INFO: Pod "projected-volume-d5f05eb6-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03023285s
May 26 13:57:38.394: INFO: Pod "projected-volume-d5f05eb6-9f58-11ea-b1d1-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.03385216s
May 26 13:57:40.398: INFO: Pod "projected-volume-d5f05eb6-9f58-11ea-b1d1-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.037710316s
STEP: Saw pod success
May 26 13:57:40.398: INFO: Pod "projected-volume-d5f05eb6-9f58-11ea-b1d1-0242ac110018" satisfied condition "success or failure"
May 26 13:57:40.401: INFO: Trying to get logs from node hunter-worker pod projected-volume-d5f05eb6-9f58-11ea-b1d1-0242ac110018 container projected-all-volume-test: 
STEP: delete the pod
May 26 13:57:40.421: INFO: Waiting for pod projected-volume-d5f05eb6-9f58-11ea-b1d1-0242ac110018 to disappear
May 26 13:57:40.492: INFO: Pod projected-volume-d5f05eb6-9f58-11ea-b1d1-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 26 13:57:40.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zmrvh" for this suite.
May 26 13:57:46.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 26 13:57:46.526: INFO: namespace: e2e-tests-projected-zmrvh, resource: bindings, ignored listing per whitelist
May 26 13:57:46.567: INFO: namespace e2e-tests-projected-zmrvh deletion completed in 6.071971179s
• [SLOW TEST:18.329 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
May 26 13:57:46.568: INFO: Running AfterSuite actions on all nodes
May 26 13:57:46.568: INFO: Running AfterSuite actions on node 1
May 26 13:57:46.568: INFO: Skipping dumping logs from cluster

Summarizing 2 Failures:

[Fail] [k8s.io] KubeletManagedEtcHosts [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110

[Fail] [sig-apps] Daemon set [Serial] [It] should retry creating failed daemon pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:272

Ran 200 of 2164 Specs in 11452.537 seconds
FAIL! -- 198 Passed | 2 Failed | 0 Pending | 1964 Skipped
--- FAIL: TestE2E (11452.74s)
FAIL