I0130 10:47:15.751299 8 e2e.go:224] Starting e2e run "e0b652bb-434d-11ea-a47a-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580381234 - Will randomize all specs
Will run 201 of 2164 specs

Jan 30 10:47:16.286: INFO: >>> kubeConfig: /root/.kube/config
Jan 30 10:47:16.297: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 30 10:47:16.332: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 30 10:47:16.377: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 30 10:47:16.377: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 30 10:47:16.377: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 30 10:47:16.390: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 30 10:47:16.390: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 30 10:47:16.390: INFO: e2e test version: v1.13.12
Jan 30 10:47:16.392: INFO: kube-apiserver version: v1.13.8
SSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:47:16.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
Jan 30 10:47:16.704: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:47:16.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-qktrp" for this suite.
Jan 30 10:47:22.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:47:22.870: INFO: namespace: e2e-tests-services-qktrp, resource: bindings, ignored listing per whitelist
Jan 30 10:47:22.902: INFO: namespace e2e-tests-services-qktrp deletion completed in 6.179698627s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:6.510 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:47:22.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-n46h6
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 30 10:47:23.207: INFO: Found 0 stateful pods, waiting for 3
Jan 30 10:47:33.224: INFO: Found 1 stateful pods, waiting for 3
Jan 30 10:47:43.235: INFO: Found 2 stateful pods, waiting for 3
Jan 30 10:47:53.231: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 10:47:53.231: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 10:47:53.231: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 30 10:47:53.290: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 30 10:48:03.430: INFO: Updating stateful set ss2
Jan 30 10:48:03.555: INFO: Waiting for Pod e2e-tests-statefulset-n46h6/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 30 10:48:16.667: INFO: Found 2 stateful pods, waiting for 3
Jan 30 10:48:26.691: INFO: Found 2 stateful pods, waiting for 3
Jan 30 10:48:36.729: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 10:48:36.729: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 10:48:36.729: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 30 10:48:46.691: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 10:48:46.691: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 10:48:46.691: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 30 10:48:46.752: INFO: Updating stateful set ss2
Jan 30 10:48:46.818: INFO: Waiting for Pod e2e-tests-statefulset-n46h6/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 10:48:56.908: INFO: Updating stateful set ss2
Jan 30 10:48:56.972: INFO: Waiting for StatefulSet e2e-tests-statefulset-n46h6/ss2 to complete update
Jan 30 10:48:56.972: INFO: Waiting for Pod e2e-tests-statefulset-n46h6/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 10:49:07.002: INFO: Waiting for StatefulSet e2e-tests-statefulset-n46h6/ss2 to complete update
Jan 30 10:49:07.003: INFO: Waiting for Pod e2e-tests-statefulset-n46h6/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 10:49:17.014: INFO: Waiting for StatefulSet e2e-tests-statefulset-n46h6/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 30 10:49:27.005: INFO: Deleting all statefulset in ns e2e-tests-statefulset-n46h6
Jan 30 10:49:27.010: INFO: Scaling statefulset ss2 to 0
Jan 30 10:49:57.095: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 10:49:57.104: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:49:57.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-n46h6" for this suite.
Jan 30 10:50:05.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:50:05.445: INFO: namespace: e2e-tests-statefulset-n46h6, resource: bindings, ignored listing per whitelist
Jan 30 10:50:05.508: INFO: namespace e2e-tests-statefulset-n46h6 deletion completed in 8.27954158s
• [SLOW TEST:162.605 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:50:05.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 30 10:50:05.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-vgrvr'
Jan 30 10:50:07.748: INFO: stderr: ""
Jan 30 10:50:07.749: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan 30 10:50:07.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vgrvr'
Jan 30 10:50:14.437: INFO: stderr: ""
Jan 30 10:50:14.437: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:50:14.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vgrvr" for this suite.
Jan 30 10:50:20.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:50:20.667: INFO: namespace: e2e-tests-kubectl-vgrvr, resource: bindings, ignored listing per whitelist
Jan 30 10:50:20.838: INFO: namespace e2e-tests-kubectl-vgrvr deletion completed in 6.387872273s
• [SLOW TEST:15.331 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:50:20.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-4fa13785-434e-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 10:50:21.157: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4fa21e3f-434e-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-t8hqd" to be "success or failure"
Jan 30 10:50:21.171: INFO: Pod "pod-projected-secrets-4fa21e3f-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.796687ms
Jan 30 10:50:23.385: INFO: Pod "pod-projected-secrets-4fa21e3f-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228269295s
Jan 30 10:50:25.399: INFO: Pod "pod-projected-secrets-4fa21e3f-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242253385s
Jan 30 10:50:27.414: INFO: Pod "pod-projected-secrets-4fa21e3f-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257170022s
Jan 30 10:50:29.436: INFO: Pod "pod-projected-secrets-4fa21e3f-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.278690073s
Jan 30 10:50:31.449: INFO: Pod "pod-projected-secrets-4fa21e3f-434e-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.292234839s
STEP: Saw pod success
Jan 30 10:50:31.449: INFO: Pod "pod-projected-secrets-4fa21e3f-434e-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 10:50:31.454: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-4fa21e3f-434e-11ea-a47a-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 30 10:50:32.085: INFO: Waiting for pod pod-projected-secrets-4fa21e3f-434e-11ea-a47a-0242ac110005 to disappear
Jan 30 10:50:32.271: INFO: Pod pod-projected-secrets-4fa21e3f-434e-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:50:32.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t8hqd" for this suite.
Jan 30 10:50:38.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:50:38.451: INFO: namespace: e2e-tests-projected-t8hqd, resource: bindings, ignored listing per whitelist
Jan 30 10:50:38.684: INFO: namespace e2e-tests-projected-t8hqd deletion completed in 6.390807666s
• [SLOW TEST:17.844 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:50:38.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 30 10:50:38.919: INFO: Waiting up to 5m0s for pod "pod-5a433029-434e-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-k8p24" to be "success or failure"
Jan 30 10:50:38.933: INFO: Pod "pod-5a433029-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.801553ms
Jan 30 10:50:41.003: INFO: Pod "pod-5a433029-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083505975s
Jan 30 10:50:43.343: INFO: Pod "pod-5a433029-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42360491s
Jan 30 10:50:45.415: INFO: Pod "pod-5a433029-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.495133398s
Jan 30 10:50:47.451: INFO: Pod "pod-5a433029-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531520145s
Jan 30 10:50:49.489: INFO: Pod "pod-5a433029-434e-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.569673278s
STEP: Saw pod success
Jan 30 10:50:49.489: INFO: Pod "pod-5a433029-434e-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 10:50:49.495: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5a433029-434e-11ea-a47a-0242ac110005 container test-container:
STEP: delete the pod
Jan 30 10:50:50.387: INFO: Waiting for pod pod-5a433029-434e-11ea-a47a-0242ac110005 to disappear
Jan 30 10:50:50.413: INFO: Pod pod-5a433029-434e-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:50:50.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k8p24" for this suite.
Jan 30 10:50:56.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:50:56.694: INFO: namespace: e2e-tests-emptydir-k8p24, resource: bindings, ignored listing per whitelist
Jan 30 10:50:56.757: INFO: namespace e2e-tests-emptydir-k8p24 deletion completed in 6.324226978s
• [SLOW TEST:18.073 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:50:56.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 30 10:50:57.037: INFO: Waiting up to 5m0s for pod "pod-650de4a3-434e-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-5zfgp" to be "success or failure"
Jan 30 10:50:57.095: INFO: Pod "pod-650de4a3-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 58.03106ms
Jan 30 10:50:59.116: INFO: Pod "pod-650de4a3-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078366229s
Jan 30 10:51:01.140: INFO: Pod "pod-650de4a3-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103141734s
Jan 30 10:51:03.161: INFO: Pod "pod-650de4a3-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123246156s
Jan 30 10:51:05.208: INFO: Pod "pod-650de4a3-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170371679s
Jan 30 10:51:07.226: INFO: Pod "pod-650de4a3-434e-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.18826602s
STEP: Saw pod success
Jan 30 10:51:07.226: INFO: Pod "pod-650de4a3-434e-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 10:51:07.231: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-650de4a3-434e-11ea-a47a-0242ac110005 container test-container:
STEP: delete the pod
Jan 30 10:51:07.923: INFO: Waiting for pod pod-650de4a3-434e-11ea-a47a-0242ac110005 to disappear
Jan 30 10:51:07.982: INFO: Pod pod-650de4a3-434e-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:51:07.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5zfgp" for this suite.
Jan 30 10:51:14.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:51:14.265: INFO: namespace: e2e-tests-emptydir-5zfgp, resource: bindings, ignored listing per whitelist
Jan 30 10:51:14.349: INFO: namespace e2e-tests-emptydir-5zfgp deletion completed in 6.35620686s
• [SLOW TEST:17.592 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:51:14.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 10:51:14.662: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f82e1e1-434e-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-v2np8" to be "success or failure"
Jan 30 10:51:14.781: INFO: Pod "downwardapi-volume-6f82e1e1-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 118.112981ms
Jan 30 10:51:16.803: INFO: Pod "downwardapi-volume-6f82e1e1-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14053537s
Jan 30 10:51:18.837: INFO: Pod "downwardapi-volume-6f82e1e1-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17398021s
Jan 30 10:51:21.407: INFO: Pod "downwardapi-volume-6f82e1e1-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.744012968s
Jan 30 10:51:23.421: INFO: Pod "downwardapi-volume-6f82e1e1-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.758256489s
Jan 30 10:51:25.444: INFO: Pod "downwardapi-volume-6f82e1e1-434e-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.781039066s
STEP: Saw pod success
Jan 30 10:51:25.444: INFO: Pod "downwardapi-volume-6f82e1e1-434e-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 10:51:25.492: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6f82e1e1-434e-11ea-a47a-0242ac110005 container client-container:
STEP: delete the pod
Jan 30 10:51:25.566: INFO: Waiting for pod downwardapi-volume-6f82e1e1-434e-11ea-a47a-0242ac110005 to disappear
Jan 30 10:51:25.583: INFO: Pod downwardapi-volume-6f82e1e1-434e-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:51:25.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-v2np8" for this suite.
Jan 30 10:51:32.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:51:32.873: INFO: namespace: e2e-tests-downward-api-v2np8, resource: bindings, ignored listing per whitelist
Jan 30 10:51:32.995: INFO: namespace e2e-tests-downward-api-v2np8 deletion completed in 6.526928414s
• [SLOW TEST:18.644 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:51:32.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 30 10:51:33.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-c44dn'
Jan 30 10:51:33.496: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 30 10:51:33.496: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan 30 10:51:35.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-c44dn'
Jan 30 10:51:36.002: INFO: stderr: ""
Jan 30 10:51:36.002: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:51:36.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c44dn" for this suite.
Jan 30 10:51:42.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:51:42.305: INFO: namespace: e2e-tests-kubectl-c44dn, resource: bindings, ignored listing per whitelist
Jan 30 10:51:42.355: INFO: namespace e2e-tests-kubectl-c44dn deletion completed in 6.327330186s
• [SLOW TEST:9.359 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:51:42.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 10:51:42.645: INFO: Creating ReplicaSet my-hostname-basic-8040acd1-434e-11ea-a47a-0242ac110005
Jan 30 10:51:42.675: INFO: Pod name my-hostname-basic-8040acd1-434e-11ea-a47a-0242ac110005: Found 0 pods out of 1
Jan 30 10:51:47.687: INFO: Pod name my-hostname-basic-8040acd1-434e-11ea-a47a-0242ac110005: Found 1 pods out of 1
Jan 30 10:51:47.687: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8040acd1-434e-11ea-a47a-0242ac110005" is running
Jan 30 10:51:53.705: INFO: Pod "my-hostname-basic-8040acd1-434e-11ea-a47a-0242ac110005-8k5lw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 10:51:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 10:51:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8040acd1-434e-11ea-a47a-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 10:51:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8040acd1-434e-11ea-a47a-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 10:51:42 +0000 UTC Reason: Message:}])
Jan 30 10:51:53.706: INFO: Trying to dial the pod
Jan 30 10:51:58.748: INFO: Controller my-hostname-basic-8040acd1-434e-11ea-a47a-0242ac110005: Got expected result from replica 1 [my-hostname-basic-8040acd1-434e-11ea-a47a-0242ac110005-8k5lw]: "my-hostname-basic-8040acd1-434e-11ea-a47a-0242ac110005-8k5lw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:51:58.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-k8nhn" for this suite.
Jan 30 10:52:06.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:52:07.131: INFO: namespace: e2e-tests-replicaset-k8nhn, resource: bindings, ignored listing per whitelist
Jan 30 10:52:07.131: INFO: namespace e2e-tests-replicaset-k8nhn deletion completed in 8.376890714s
• [SLOW TEST:24.776 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:52:07.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 30 10:52:08.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5j7fb'
Jan 30 10:52:08.620: INFO: stderr: ""
Jan 30 10:52:08.620: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 30 10:52:18.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5j7fb -o json'
Jan 30 10:52:18.858: INFO: stderr: ""
Jan 30 10:52:18.859: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-30T10:52:08Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-5j7fb\",\n \"resourceVersion\": \"19957622\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-5j7fb/pods/e2e-test-nginx-pod\",\n \"uid\": \"8fb37641-434e-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-wsj42\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-wsj42\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-wsj42\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-30T10:52:08Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-30T10:52:18Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-30T10:52:18Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-30T10:52:08Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://9252ade0c4b665ca0d17dc2379baf3e77d934d73bf9f0d3610edad8f78869f48\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-30T10:52:17Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-30T10:52:08Z\"\n }\n}\n"
STEP: replace the image in the pod
Jan 30 10:52:18.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-5j7fb'
Jan 30 10:52:19.467: INFO: stderr: ""
Jan 30 10:52:19.467: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan 30 10:52:19.510: INFO: Running
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5j7fb' Jan 30 10:52:27.325: INFO: stderr: "" Jan 30 10:52:27.326: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 10:52:27.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5j7fb" for this suite. Jan 30 10:52:33.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 10:52:33.459: INFO: namespace: e2e-tests-kubectl-5j7fb, resource: bindings, ignored listing per whitelist Jan 30 10:52:33.606: INFO: namespace e2e-tests-kubectl-5j7fb deletion completed in 6.264092717s • [SLOW TEST:26.474 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 10:52:33.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Jan 30 10:52:33.832: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 10:52:34.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vzj2p" for this suite. Jan 30 10:52:40.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 10:52:40.273: INFO: namespace: e2e-tests-kubectl-vzj2p, resource: bindings, ignored listing per whitelist Jan 30 10:52:40.406: INFO: namespace e2e-tests-kubectl-vzj2p deletion completed in 6.365196395s • [SLOW TEST:6.799 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 10:52:40.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting 
for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4kmt7 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-4kmt7;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4kmt7 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-4kmt7;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4kmt7.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-4kmt7.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4kmt7.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-4kmt7.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc;check="$$(dig +tcp +noall +answer +search 
_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4kmt7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 27.53.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.53.27_udp@PTR;check="$$(dig +tcp +noall +answer +search 27.53.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.53.27_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4kmt7 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-4kmt7;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4kmt7 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-4kmt7;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4kmt7.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-4kmt7.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4kmt7.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-4kmt7.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc SRV)" && test -n "$$check" && echo OK > 
/results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4kmt7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 27.53.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.53.27_udp@PTR;check="$$(dig +tcp +noall +answer +search 27.53.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.53.27_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 30 10:52:55.264: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.275: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.282: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-4kmt7 from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.291: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4kmt7 from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.297: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-4kmt7.svc from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.303: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4kmt7.svc from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.310: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc from pod 
e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.319: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.325: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.337: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.344: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.348: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.353: INFO: Unable to read 10.105.53.27_udp@PTR from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.357: INFO: Unable to read 10.105.53.27_tcp@PTR from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods 
dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.362: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.368: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.373: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4kmt7 from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.377: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4kmt7 from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.381: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4kmt7.svc from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.385: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4kmt7.svc from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.390: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.394: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.397: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.403: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.408: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.413: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.418: INFO: Unable to read 10.105.53.27_udp@PTR from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.422: INFO: Unable to read 10.105.53.27_tcp@PTR from pod e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-a2f825e6-434e-11ea-a47a-0242ac110005) Jan 30 10:52:55.422: INFO: Lookups using e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.e2e-tests-dns-4kmt7 wheezy_tcp@dns-test-service.e2e-tests-dns-4kmt7 wheezy_udp@dns-test-service.e2e-tests-dns-4kmt7.svc wheezy_tcp@dns-test-service.e2e-tests-dns-4kmt7.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.105.53.27_udp@PTR 10.105.53.27_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-4kmt7 jessie_tcp@dns-test-service.e2e-tests-dns-4kmt7 jessie_udp@dns-test-service.e2e-tests-dns-4kmt7.svc jessie_tcp@dns-test-service.e2e-tests-dns-4kmt7.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4kmt7.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4kmt7.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.105.53.27_udp@PTR 10.105.53.27_tcp@PTR] Jan 30 10:53:01.027: INFO: DNS probes using e2e-tests-dns-4kmt7/dns-test-a2f825e6-434e-11ea-a47a-0242ac110005 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 10:53:02.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-4kmt7" for this suite. 
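The probe scripts above query two derived names: a pod A record built by dashing the pod IP and appending `<namespace>.pod.cluster.local`, and a reverse PTR name built by reversing the service ClusterIP's octets under `in-addr.arpa.`. A minimal sketch of that name construction, using the namespace and service IP from this log (the pod IP here is illustrative):

```shell
#!/bin/sh
# Derive the DNS names the probe queries, as the embedded dig loops do.
ns="e2e-tests-dns-4kmt7"     # namespace from the log
pod_ip="10.32.0.4"           # illustrative pod IP
svc_ip="10.105.53.27"        # service ClusterIP seen in the PTR checks

# Pod A record: dots in the IP become dashes, then
# <namespace>.pod.cluster.local (matches the awk in the probe script).
pod_a_rec="$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4}').$ns.pod.cluster.local"

# Reverse PTR name: octets reversed, suffixed with in-addr.arpa.
ptr_name="$(echo "$svc_ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')"

echo "$pod_a_rec"
echo "$ptr_name"
```

For the service IP 10.105.53.27 this yields the `27.53.105.10.in-addr.arpa.` name that appears in the dig commands above.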
Jan 30 10:53:08.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 10:53:09.005: INFO: namespace: e2e-tests-dns-4kmt7, resource: bindings, ignored listing per whitelist Jan 30 10:53:09.155: INFO: namespace e2e-tests-dns-4kmt7 deletion completed in 6.249502345s • [SLOW TEST:28.749 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 10:53:09.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 30 10:53:09.464: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3fd43d1-434e-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-8khnd" to be "success or failure" Jan 30 10:53:09.488: INFO: Pod "downwardapi-volume-b3fd43d1-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.288186ms
Jan 30 10:53:12.036: INFO: Pod "downwardapi-volume-b3fd43d1-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.571454835s
Jan 30 10:53:14.062: INFO: Pod "downwardapi-volume-b3fd43d1-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.598082967s
Jan 30 10:53:16.384: INFO: Pod "downwardapi-volume-b3fd43d1-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.919321627s
Jan 30 10:53:18.585: INFO: Pod "downwardapi-volume-b3fd43d1-434e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.120484338s
Jan 30 10:53:20.615: INFO: Pod "downwardapi-volume-b3fd43d1-434e-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.150439653s
STEP: Saw pod success
Jan 30 10:53:20.615: INFO: Pod "downwardapi-volume-b3fd43d1-434e-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 10:53:20.623: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b3fd43d1-434e-11ea-a47a-0242ac110005 container client-container:
STEP: delete the pod
Jan 30 10:53:20.869: INFO: Waiting for pod downwardapi-volume-b3fd43d1-434e-11ea-a47a-0242ac110005 to disappear
Jan 30 10:53:20.881: INFO: Pod downwardapi-volume-b3fd43d1-434e-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:53:20.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8khnd" for this suite. 
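The "success or failure" wait above is a poll loop: check the pod phase, stop on a terminal phase, otherwise sleep and retry until a deadline. A minimal sketch, where `get_phase` is a hypothetical stand-in for something like `kubectl get pod "$pod" -o jsonpath='{.status.phase}'` (here it simulates a pod that is Pending for three polls, then Succeeded):

```shell
#!/bin/sh
# Poll until the pod reaches a terminal phase or the attempt budget runs out.
attempt=0
get_phase() {
    # Simulated phases; a real poll would call kubectl on each iteration.
    if [ "$attempt" -lt 3 ]; then echo "Pending"; else echo "Succeeded"; fi
}

phase="Pending"
while [ "$attempt" -lt 10 ]; do
    phase="$(get_phase)"
    case "$phase" in
        Succeeded|Failed) break ;;   # terminal: condition satisfied or not
    esac
    attempt=$((attempt + 1))
    # sleep 2   # a real loop would pause between polls
done
echo "$phase"
```

The framework does the same with a 5m0s deadline, logging the elapsed time at each poll as seen above.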
Jan 30 10:53:26.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 10:53:27.089: INFO: namespace: e2e-tests-downward-api-8khnd, resource: bindings, ignored listing per whitelist Jan 30 10:53:27.114: INFO: namespace e2e-tests-downward-api-8khnd deletion completed in 6.223427504s • [SLOW TEST:17.958 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 10:53:27.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Jan 30 10:53:27.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8js99' Jan 30 10:53:27.725: INFO: stderr: "" Jan 30 10:53:27.725: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan 30 10:53:29.339: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 10:53:29.339: INFO: Found 0 / 1
Jan 30 10:53:29.771: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 10:53:29.772: INFO: Found 0 / 1
Jan 30 10:53:30.746: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 10:53:30.746: INFO: Found 0 / 1
Jan 30 10:53:31.782: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 10:53:31.782: INFO: Found 0 / 1
Jan 30 10:53:32.968: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 10:53:32.969: INFO: Found 0 / 1
Jan 30 10:53:33.845: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 10:53:33.846: INFO: Found 0 / 1
Jan 30 10:53:34.764: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 10:53:34.764: INFO: Found 0 / 1
Jan 30 10:53:35.750: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 10:53:35.750: INFO: Found 0 / 1
Jan 30 10:53:36.742: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 10:53:36.742: INFO: Found 1 / 1
Jan 30 10:53:36.742: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 30 10:53:36.748: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 10:53:36.748: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Jan 30 10:53:36.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7skzl redis-master --namespace=e2e-tests-kubectl-8js99'
Jan 30 10:53:36.957: INFO: stderr: ""
Jan 30 10:53:36.957: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 30 Jan 10:53:35.645 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Jan 10:53:35.645 # Server started, Redis version 3.2.12\n1:M 30 Jan 10:53:35.646 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Jan 10:53:35.646 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 30 10:53:36.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7skzl redis-master --namespace=e2e-tests-kubectl-8js99 --tail=1'
Jan 30 10:53:37.115: INFO: stderr: ""
Jan 30 10:53:37.115: INFO: stdout: "1:M 30 Jan 10:53:35.646 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 30 10:53:37.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7skzl redis-master --namespace=e2e-tests-kubectl-8js99 --limit-bytes=1'
Jan 30 10:53:37.274: INFO: stderr: ""
Jan 30 10:53:37.274: INFO: stdout: " "
STEP: exposing timestamps
Jan 30 10:53:37.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7skzl redis-master --namespace=e2e-tests-kubectl-8js99 --tail=1 --timestamps'
Jan 30 10:53:37.402: INFO: stderr: ""
Jan 30 10:53:37.402: INFO: stdout: "2020-01-30T10:53:35.647900375Z 1:M 30 Jan 10:53:35.646 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 30 10:53:39.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7skzl redis-master --namespace=e2e-tests-kubectl-8js99 --since=1s'
Jan 30 10:53:40.153: INFO: stderr: ""
Jan 30 10:53:40.153: INFO: stdout: ""
Jan 30 10:53:40.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7skzl redis-master --namespace=e2e-tests-kubectl-8js99 --since=24h'
Jan 30 10:53:40.387: INFO: stderr: ""
Jan 30 10:53:40.387: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 30 Jan 10:53:35.645 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Jan 10:53:35.645 # Server started, Redis version 3.2.12\n1:M 30 Jan 10:53:35.646 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Jan 10:53:35.646 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan 30 10:53:40.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8js99'
Jan 30 10:53:40.909: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 10:53:40.910: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 30 10:53:40.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-8js99'
Jan 30 10:53:41.102: INFO: stderr: "No resources found.\n"
Jan 30 10:53:41.103: INFO: stdout: ""
Jan 30 10:53:41.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-8js99 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 30 10:53:41.261: INFO: stderr: ""
Jan 30 10:53:41.261: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:53:41.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8js99" for this suite.
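The `--tail`, `--limit-bytes`, and `--since` filters exercised by the kubectl-logs steps above have simple, observable semantics. A minimal Python sketch of that behavior (illustrative only; this is not kubectl's or the kubelet's actual implementation):

```python
from datetime import datetime, timedelta

def tail(log_text, n):
    """Keep only the last n lines, like `kubectl logs --tail=n`."""
    lines = log_text.splitlines(keepends=True)
    return "".join(lines[-n:])

def limit_bytes(log_text, n):
    """Truncate output to at most n bytes, like `kubectl logs --limit-bytes=n`."""
    return log_text.encode()[:n].decode(errors="ignore")

def since(entries, window, now):
    """Keep (timestamp, line) entries newer than now - window, like `--since`."""
    cutoff = now - window
    return [line for ts, line in entries if ts >= cutoff]

log = "line one\nline two\nline three\n"
assert tail(log, 1) == "line three\n"   # mirrors the --tail=1 step above
assert limit_bytes(log, 1) == "l"       # mirrors --limit-bytes=1 returning 1 byte
```

This also explains the run above: `--since=1s` returned an empty string because the last Redis log line was written seconds earlier, while `--since=24h` returned the full banner.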
Jan 30 10:53:47.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:53:47.508: INFO: namespace: e2e-tests-kubectl-8js99, resource: bindings, ignored listing per whitelist
Jan 30 10:53:47.601: INFO: namespace e2e-tests-kubectl-8js99 deletion completed in 6.25157311s
• [SLOW TEST:20.487 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:53:47.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 10:53:47.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:53:58.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bltnb" for this suite.
Jan 30 10:54:44.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:54:44.513: INFO: namespace: e2e-tests-pods-bltnb, resource: bindings, ignored listing per whitelist
Jan 30 10:54:44.654: INFO: namespace e2e-tests-pods-bltnb deletion completed in 46.306608768s
• [SLOW TEST:57.053 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:54:44.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dbtgl
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 30 10:54:44.886: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 30 10:55:19.122: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-dbtgl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 10:55:19.122: INFO: >>> kubeConfig: /root/.kube/config
I0130 10:55:19.224708 8 log.go:172] (0xc000d4e2c0) (0xc001525ae0) Create stream
I0130 10:55:19.225031 8 log.go:172] (0xc000d4e2c0) (0xc001525ae0) Stream added, broadcasting: 1
I0130 10:55:19.238834 8 log.go:172] (0xc000d4e2c0) Reply frame received for 1
I0130 10:55:19.239119 8 log.go:172] (0xc000d4e2c0) (0xc001c79360) Create stream
I0130 10:55:19.239178 8 log.go:172] (0xc000d4e2c0) (0xc001c79360) Stream added, broadcasting: 3
I0130 10:55:19.243308 8 log.go:172] (0xc000d4e2c0) Reply frame received for 3
I0130 10:55:19.243340 8 log.go:172] (0xc000d4e2c0) (0xc001a69360) Create stream
I0130 10:55:19.243351 8 log.go:172] (0xc000d4e2c0) (0xc001a69360) Stream added, broadcasting: 5
I0130 10:55:19.246528 8 log.go:172] (0xc000d4e2c0) Reply frame received for 5
I0130 10:55:19.415871 8 log.go:172] (0xc000d4e2c0) Data frame received for 3
I0130 10:55:19.415980 8 log.go:172] (0xc001c79360) (3) Data frame handling
I0130 10:55:19.416009 8 log.go:172] (0xc001c79360) (3) Data frame sent
I0130 10:55:19.531162 8 log.go:172] (0xc000d4e2c0) Data frame received for 1
I0130 10:55:19.531422 8 log.go:172] (0xc000d4e2c0) (0xc001c79360) Stream removed, broadcasting: 3
I0130 10:55:19.531502 8 log.go:172] (0xc001525ae0) (1) Data frame handling
I0130 10:55:19.531552 8 log.go:172] (0xc001525ae0) (1) Data frame sent
I0130 10:55:19.531615 8 log.go:172] (0xc000d4e2c0) (0xc001a69360) Stream removed, broadcasting: 5
I0130 10:55:19.531787 8 log.go:172] (0xc000d4e2c0) (0xc001525ae0) Stream removed, broadcasting: 1
I0130 10:55:19.531880 8 log.go:172] (0xc000d4e2c0) Go away received
I0130 10:55:19.532986 8 log.go:172] (0xc000d4e2c0) (0xc001525ae0) Stream removed, broadcasting: 1
I0130 10:55:19.533023 8 log.go:172] (0xc000d4e2c0) (0xc001c79360) Stream removed, broadcasting: 3
I0130 10:55:19.533043 8 log.go:172] (0xc000d4e2c0) (0xc001a69360) Stream removed, broadcasting: 5
Jan 30 10:55:19.533: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:55:19.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-dbtgl" for this suite.
Jan 30 10:55:43.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:55:43.719: INFO: namespace: e2e-tests-pod-network-test-dbtgl, resource: bindings, ignored listing per whitelist
Jan 30 10:55:43.792: INFO: namespace e2e-tests-pod-network-test-dbtgl deletion completed in 24.241870945s
• [SLOW TEST:59.138 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:55:43.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 30 10:55:44.076: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-p8plx" to be "success or failure"
Jan 30 10:55:44.089: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.97507ms
Jan 30 10:55:46.171: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093681057s
Jan 30 10:55:48.185: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108526189s
Jan 30 10:55:50.928: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.851085646s
Jan 30 10:55:53.162: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.085369279s
Jan 30 10:55:55.202: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.125471736s
Jan 30 10:55:57.214: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.137485686s
STEP: Saw pod success
Jan 30 10:55:57.215: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 30 10:55:57.218: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jan 30 10:55:58.492: INFO: Waiting for pod pod-host-path-test to disappear
Jan 30 10:55:58.527: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:55:58.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-p8plx" for this suite.
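The repeated `Phase="Pending" ... Elapsed: ...` lines above come from the framework polling a pod until it reaches a terminal phase or a 5m0s deadline passes. A hedged sketch of that poll-until-done pattern (the `get_phase` stub, interval, and injectable clock are illustrative, not the e2e framework's actual code):

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300.0, interval_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until a terminal phase or timeout, mimicking the
    framework's 'Waiting up to 5m0s for pod ... to be "success or failure"'."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase  # terminal phase: stop polling
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {elapsed:.0f}s")
        sleep(interval_s)

# Simulated pod that stays Pending for a few polls, then succeeds.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), sleep=lambda s: None)
assert result == "Succeeded"
```

The injectable `clock` and `sleep` keep the sketch testable without real delays; the real framework uses a wait-with-poll helper with the same shape.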
Jan 30 10:56:04.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:56:04.860: INFO: namespace: e2e-tests-hostpath-p8plx, resource: bindings, ignored listing per whitelist
Jan 30 10:56:04.923: INFO: namespace e2e-tests-hostpath-p8plx deletion completed in 6.362563381s
• [SLOW TEST:21.129 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:56:04.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 10:56:05.084: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1cac4a23-434f-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-4zvg6" to be "success or failure"
Jan 30 10:56:05.095: INFO: Pod "downwardapi-volume-1cac4a23-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.853337ms
Jan 30 10:56:07.108: INFO: Pod "downwardapi-volume-1cac4a23-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023245332s
Jan 30 10:56:09.134: INFO: Pod "downwardapi-volume-1cac4a23-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048927439s
Jan 30 10:56:11.150: INFO: Pod "downwardapi-volume-1cac4a23-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065255883s
Jan 30 10:56:13.411: INFO: Pod "downwardapi-volume-1cac4a23-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326348925s
Jan 30 10:56:15.437: INFO: Pod "downwardapi-volume-1cac4a23-434f-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.352273731s
STEP: Saw pod success
Jan 30 10:56:15.437: INFO: Pod "downwardapi-volume-1cac4a23-434f-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 10:56:15.448: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1cac4a23-434f-11ea-a47a-0242ac110005 container client-container:
STEP: delete the pod
Jan 30 10:56:15.832: INFO: Waiting for pod downwardapi-volume-1cac4a23-434f-11ea-a47a-0242ac110005 to disappear
Jan 30 10:56:15.923: INFO: Pod downwardapi-volume-1cac4a23-434f-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:56:15.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4zvg6" for this suite.
Jan 30 10:56:21.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:56:22.097: INFO: namespace: e2e-tests-projected-4zvg6, resource: bindings, ignored listing per whitelist
Jan 30 10:56:22.139: INFO: namespace e2e-tests-projected-4zvg6 deletion completed in 6.201509856s
• [SLOW TEST:17.216 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:56:22.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0130 10:56:32.473914 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 10:56:32.474: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:56:32.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ddpn9" for this suite.
Jan 30 10:56:38.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:56:38.926: INFO: namespace: e2e-tests-gc-ddpn9, resource: bindings, ignored listing per whitelist
Jan 30 10:56:38.935: INFO: namespace e2e-tests-gc-ddpn9 deletion completed in 6.446021833s
• [SLOW TEST:16.795 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:56:38.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-31095d19-434f-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 10:56:39.256: INFO: Waiting up to 5m0s for pod "pod-configmaps-310a854c-434f-11ea-a47a-0242ac110005" in namespace "e2e-tests-configmap-227b5" to be "success or failure"
Jan 30 10:56:39.269: INFO: Pod "pod-configmaps-310a854c-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.529237ms
Jan 30 10:56:41.426: INFO: Pod "pod-configmaps-310a854c-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169667113s
Jan 30 10:56:43.443: INFO: Pod "pod-configmaps-310a854c-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187000638s
Jan 30 10:56:45.643: INFO: Pod "pod-configmaps-310a854c-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.386441113s
Jan 30 10:56:47.675: INFO: Pod "pod-configmaps-310a854c-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.418922382s
Jan 30 10:56:49.709: INFO: Pod "pod-configmaps-310a854c-434f-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.452747203s
STEP: Saw pod success
Jan 30 10:56:49.709: INFO: Pod "pod-configmaps-310a854c-434f-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 10:56:49.718: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-310a854c-434f-11ea-a47a-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 30 10:56:50.623: INFO: Waiting for pod pod-configmaps-310a854c-434f-11ea-a47a-0242ac110005 to disappear
Jan 30 10:56:50.642: INFO: Pod pod-configmaps-310a854c-434f-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:56:50.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-227b5" for this suite.
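The ConfigMap volume "with mappings" test that just completed mounts only selected keys of a ConfigMap at caller-chosen relative paths (the `items` field of a configMap volume source). A small Python model of that key-to-path projection (the key and path names are illustrative; in a real cluster the kubelet performs this projection):

```python
import os
import tempfile

def project_configmap(data, items, mount_dir):
    """Write each mapped key of a ConfigMap's data to its mapped relative
    path under mount_dir, mimicking spec.volumes[].configMap.items."""
    for key, rel_path in items.items():
        dest = os.path.join(mount_dir, rel_path)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(dest, "w") as f:
            f.write(data[key])

data = {"data-1": "value-1", "data-2": "value-2"}
with tempfile.TemporaryDirectory() as mnt:
    # Only data-1 is mapped, so only it appears in the mounted volume.
    project_configmap(data, {"data-1": "path/to/data-1"}, mnt)
    with open(os.path.join(mnt, "path/to/data-1")) as f:
        assert f.read() == "value-1"
    assert not os.path.exists(os.path.join(mnt, "data-2"))
```

The same projection semantics apply to the Secret volume "with mappings" test that follows, with base64-decoded secret data in place of ConfigMap strings.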
Jan 30 10:56:56.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:56:57.037: INFO: namespace: e2e-tests-configmap-227b5, resource: bindings, ignored listing per whitelist
Jan 30 10:56:57.042: INFO: namespace e2e-tests-configmap-227b5 deletion completed in 6.372956809s
• [SLOW TEST:18.107 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:56:57.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-3bdfdbe8-434f-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 10:56:57.585: INFO: Waiting up to 5m0s for pod "pod-secrets-3be0fdfc-434f-11ea-a47a-0242ac110005" in namespace "e2e-tests-secrets-rbx5l" to be "success or failure"
Jan 30 10:56:57.613: INFO: Pod "pod-secrets-3be0fdfc-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.785153ms
Jan 30 10:56:59.796: INFO: Pod "pod-secrets-3be0fdfc-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210215054s
Jan 30 10:57:01.822: INFO: Pod "pod-secrets-3be0fdfc-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236993161s
Jan 30 10:57:03.845: INFO: Pod "pod-secrets-3be0fdfc-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.259286448s
Jan 30 10:57:06.226: INFO: Pod "pod-secrets-3be0fdfc-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.640523917s
Jan 30 10:57:08.251: INFO: Pod "pod-secrets-3be0fdfc-434f-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.665562477s
STEP: Saw pod success
Jan 30 10:57:08.251: INFO: Pod "pod-secrets-3be0fdfc-434f-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 10:57:08.265: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-3be0fdfc-434f-11ea-a47a-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 30 10:57:08.519: INFO: Waiting for pod pod-secrets-3be0fdfc-434f-11ea-a47a-0242ac110005 to disappear
Jan 30 10:57:08.597: INFO: Pod pod-secrets-3be0fdfc-434f-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:57:08.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rbx5l" for this suite.
Jan 30 10:57:16.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:57:16.765: INFO: namespace: e2e-tests-secrets-rbx5l, resource: bindings, ignored listing per whitelist
Jan 30 10:57:16.832: INFO: namespace e2e-tests-secrets-rbx5l deletion completed in 8.213863338s
• [SLOW TEST:19.790 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:57:16.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-qcfb5
Jan 30 10:57:27.513: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-qcfb5
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 10:57:27.521: INFO: Initial restart count of pod liveness-http is 0
Jan 30 10:57:44.313: INFO: Restart count of pod e2e-tests-container-probe-qcfb5/liveness-http is now 1 (16.792269485s elapsed)
Jan 30 10:58:04.752: INFO: Restart count of pod e2e-tests-container-probe-qcfb5/liveness-http is now 2 (37.231124564s elapsed)
Jan 30 10:58:24.998: INFO: Restart count of pod e2e-tests-container-probe-qcfb5/liveness-http is now 3 (57.477110921s elapsed)
Jan 30 10:58:45.948: INFO: Restart count of pod e2e-tests-container-probe-qcfb5/liveness-http is now 4 (1m18.426385432s elapsed)
Jan 30 10:59:52.791: INFO: Restart count of pod e2e-tests-container-probe-qcfb5/liveness-http is now 5 (2m25.269396731s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 10:59:52.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-qcfb5" for this suite.
Jan 30 10:59:59.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 10:59:59.231: INFO: namespace: e2e-tests-container-probe-qcfb5, resource: bindings, ignored listing per whitelist
Jan 30 10:59:59.285: INFO: namespace e2e-tests-container-probe-qcfb5 deletion completed in 6.299698655s
• [SLOW TEST:162.453 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 10:59:59.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 30 10:59:59.507: INFO: Waiting up to 5m0s for pod "downward-api-a8652827-434f-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-blzr9" to be "success or failure"
Jan 30 10:59:59.526: INFO: Pod "downward-api-a8652827-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.377704ms
Jan 30 11:00:01.696: INFO: Pod "downward-api-a8652827-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188833186s
Jan 30 11:00:03.714: INFO: Pod "downward-api-a8652827-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206640803s
Jan 30 11:00:05.822: INFO: Pod "downward-api-a8652827-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.315199415s
Jan 30 11:00:07.836: INFO: Pod "downward-api-a8652827-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.329198708s
Jan 30 11:00:09.851: INFO: Pod "downward-api-a8652827-434f-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.343519349s
STEP: Saw pod success
Jan 30 11:00:09.851: INFO: Pod "downward-api-a8652827-434f-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:00:09.858: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-a8652827-434f-11ea-a47a-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 30 11:00:10.664: INFO: Waiting for pod downward-api-a8652827-434f-11ea-a47a-0242ac110005 to disappear
Jan 30 11:00:10.681: INFO: Pod downward-api-a8652827-434f-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:00:10.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-blzr9" for this suite.
Jan 30 11:00:16.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:00:16.807: INFO: namespace: e2e-tests-downward-api-blzr9, resource: bindings, ignored listing per whitelist
Jan 30 11:00:16.853: INFO: namespace e2e-tests-downward-api-blzr9 deletion completed in 6.157368833s
• [SLOW TEST:17.568 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:00:16.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-b2e13bfb-434f-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 11:00:17.094: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2e1d176-434f-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-hhgbn" to be "success or failure"
Jan 30 11:00:17.100: INFO: Pod "pod-projected-configmaps-b2e1d176-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.88796ms
Jan 30 11:00:19.122: INFO: Pod "pod-projected-configmaps-b2e1d176-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027733044s
Jan 30 11:00:21.154: INFO: Pod "pod-projected-configmaps-b2e1d176-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059458575s
Jan 30 11:00:23.185: INFO: Pod "pod-projected-configmaps-b2e1d176-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090228562s
Jan 30 11:00:25.197: INFO: Pod "pod-projected-configmaps-b2e1d176-434f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102465816s
Jan 30 11:00:27.216: INFO: Pod "pod-projected-configmaps-b2e1d176-434f-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.121366252s
STEP: Saw pod success
Jan 30 11:00:27.216: INFO: Pod "pod-projected-configmaps-b2e1d176-434f-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:00:27.225: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b2e1d176-434f-11ea-a47a-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan 30 11:00:27.443: INFO: Waiting for pod pod-projected-configmaps-b2e1d176-434f-11ea-a47a-0242ac110005 to disappear
Jan 30 11:00:27.561: INFO: Pod pod-projected-configmaps-b2e1d176-434f-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:00:27.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hhgbn" for this suite.
Jan 30 11:00:33.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:00:33.672: INFO: namespace: e2e-tests-projected-hhgbn, resource: bindings, ignored listing per whitelist
Jan 30 11:00:33.908: INFO: namespace e2e-tests-projected-hhgbn deletion completed in 6.329659103s
• [SLOW TEST:17.054 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:00:33.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 30 11:01:06.262: INFO: Container started at 2020-01-30 11:00:42 +0000 UTC, pod became ready at 2020-01-30 11:01:06 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:01:06.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jq8rs" for this suite. 
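The probe timing recorded above (container started at 11:00:42, pod Ready at 11:01:06) is exactly what this conformance test asserts: a pod with a readiness probe must not report Ready before the probe's initial delay has elapsed, and must never restart. A minimal sketch of the kind of pod spec such a test creates follows; the name, image, and timing values are illustrative, not taken from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo          # illustrative name
spec:
  containers:
  - name: app
    image: busybox              # any long-running image works
    args: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["test", "-e", "/tmp/ready"]
      initialDelaySeconds: 20   # kubelet does not probe before this, so the pod cannot be Ready earlier
      periodSeconds: 5
      failureThreshold: 3
```

A failing readiness probe only removes the pod from service endpoints; unlike a liveness probe, it never restarts the container, which is why the test also asserts a restart count of zero.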
Jan 30 11:01:30.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:01:30.342: INFO: namespace: e2e-tests-container-probe-jq8rs, resource: bindings, ignored listing per whitelist Jan 30 11:01:30.489: INFO: namespace e2e-tests-container-probe-jq8rs deletion completed in 24.220930029s • [SLOW TEST:56.580 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:01:30.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components
Jan 30 11:01:30.765: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Jan 30 11:01:30.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4znb5' Jan 30 11:01:33.397: INFO: stderr: "" Jan 30 11:01:33.397: INFO: stdout: "service/redis-slave created\n"
Jan 30 11:01:33.399: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Jan 30 11:01:33.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4znb5' Jan 30 11:01:34.066: INFO: stderr: "" Jan 30 11:01:34.066: INFO: stdout: "service/redis-master created\n"
Jan 30 11:01:34.067: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jan 30 11:01:34.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4znb5' Jan 30 11:01:34.723: INFO: stderr: "" Jan 30 11:01:34.724: INFO: stdout: "service/frontend created\n"
Jan 30 11:01:34.725: INFO:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Jan 30 11:01:34.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4znb5' Jan 30 11:01:35.213: INFO: stderr: "" Jan 30 11:01:35.214: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 30 11:01:35.216: INFO:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 30 11:01:35.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4znb5' Jan 30 11:01:35.732: INFO: stderr: "" Jan 30 11:01:35.732: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 30 11:01:35.734: INFO:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Jan 30 11:01:35.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4znb5' Jan 30 11:01:36.448: INFO: stderr: "" Jan 30 11:01:36.448: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app Jan 30 11:01:36.448: INFO: Waiting for all frontend pods to be Running. Jan 30 11:02:06.502: INFO: Waiting for frontend to serve content. Jan 30 11:02:06.728: INFO: Trying to add a new entry to the guestbook. Jan 30 11:02:06.805: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources Jan 30 11:02:06.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4znb5' Jan 30 11:02:07.204: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 30 11:02:07.205: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jan 30 11:02:07.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4znb5' Jan 30 11:02:07.405: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 30 11:02:07.406: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 30 11:02:07.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4znb5' Jan 30 11:02:07.603: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 30 11:02:07.603: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 30 11:02:07.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4znb5' Jan 30 11:02:07.727: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 30 11:02:07.727: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 30 11:02:07.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4znb5' Jan 30 11:02:08.069: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 30 11:02:08.070: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 30 11:02:08.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4znb5' Jan 30 11:02:08.435: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 30 11:02:08.435: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:02:08.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4znb5" for this suite. 
Jan 30 11:02:54.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:02:54.963: INFO: namespace: e2e-tests-kubectl-4znb5, resource: bindings, ignored listing per whitelist Jan 30 11:02:55.001: INFO: namespace e2e-tests-kubectl-4znb5 deletion completed in 46.503104275s • [SLOW TEST:84.511 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:02:55.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Jan 30 11:02:55.223: INFO: Waiting up to 5m0s for pod "client-containers-11216c7c-4350-11ea-a47a-0242ac110005" in namespace "e2e-tests-containers-2ggjr" to be "success or failure" Jan 30 11:02:55.253: INFO: Pod "client-containers-11216c7c-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.955023ms Jan 30 11:02:57.436: INFO: Pod "client-containers-11216c7c-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212902738s Jan 30 11:02:59.505: INFO: Pod "client-containers-11216c7c-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.281427554s Jan 30 11:03:01.519: INFO: Pod "client-containers-11216c7c-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.295858159s Jan 30 11:03:03.542: INFO: Pod "client-containers-11216c7c-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318580476s Jan 30 11:03:05.556: INFO: Pod "client-containers-11216c7c-4350-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.333042897s STEP: Saw pod success Jan 30 11:03:05.556: INFO: Pod "client-containers-11216c7c-4350-11ea-a47a-0242ac110005" satisfied condition "success or failure" Jan 30 11:03:05.573: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-11216c7c-4350-11ea-a47a-0242ac110005 container test-container: STEP: delete the pod Jan 30 11:03:05.857: INFO: Waiting for pod client-containers-11216c7c-4350-11ea-a47a-0242ac110005 to disappear Jan 30 11:03:05.875: INFO: Pod client-containers-11216c7c-4350-11ea-a47a-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:03:05.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-2ggjr" for this suite. 
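The "override all" pod exercised above relies on the rule that a container's `command` replaces the image's ENTRYPOINT and its `args` replaces the image's CMD. A minimal sketch of that kind of spec, with illustrative names and values rather than the ones used by this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]             # replaces the image's ENTRYPOINT
    args: ["overridden", "arguments"]  # replaces the image's CMD
```

The test then reads the container's log and checks that the output reflects the overridden command and arguments, which is why the pod only needs to run to completion ("Succeeded") rather than stay up.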
Jan 30 11:03:13.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:03:14.139: INFO: namespace: e2e-tests-containers-2ggjr, resource: bindings, ignored listing per whitelist Jan 30 11:03:14.227: INFO: namespace e2e-tests-containers-2ggjr deletion completed in 8.342909861s • [SLOW TEST:19.226 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:03:14.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-wnpwp STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-wnpwp STEP: Deleting pre-stop pod
Jan 30 11:03:37.643: INFO: Saw: {
  "Hostname": "server",
  "Sent": null,
  "Received": {
    "prestop": 1
  },
  "Errors": null,
  "Log": [
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
  ],
  "StillContactingPeers": true
}
STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:03:37.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-wnpwp" for this suite. Jan 30 11:04:17.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:04:18.019: INFO: namespace: e2e-tests-prestop-wnpwp, resource: bindings, ignored listing per whitelist Jan 30 11:04:18.038: INFO: namespace e2e-tests-prestop-wnpwp deletion completed in 40.277925063s • [SLOW TEST:63.811 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:04:18.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:04:28.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-vsjbl" for this suite. Jan 30 11:04:34.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:04:34.517: INFO: namespace: e2e-tests-emptydir-wrapper-vsjbl, resource: bindings, ignored listing per whitelist Jan 30 11:04:34.649: INFO: namespace e2e-tests-emptydir-wrapper-vsjbl deletion completed in 6.251311774s • [SLOW TEST:16.611 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:04:34.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 30 
11:04:34.759: INFO: Waiting up to 5m0s for pod "pod-4c758ce3-4350-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-hrc88" to be "success or failure" Jan 30 11:04:34.853: INFO: Pod "pod-4c758ce3-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 94.022428ms Jan 30 11:04:36.876: INFO: Pod "pod-4c758ce3-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117310378s Jan 30 11:04:38.897: INFO: Pod "pod-4c758ce3-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137820411s Jan 30 11:04:40.914: INFO: Pod "pod-4c758ce3-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154505637s Jan 30 11:04:43.114: INFO: Pod "pod-4c758ce3-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.354548899s Jan 30 11:04:45.863: INFO: Pod "pod-4c758ce3-4350-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.103721576s STEP: Saw pod success Jan 30 11:04:45.863: INFO: Pod "pod-4c758ce3-4350-11ea-a47a-0242ac110005" satisfied condition "success or failure" Jan 30 11:04:45.993: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4c758ce3-4350-11ea-a47a-0242ac110005 container test-container: STEP: delete the pod Jan 30 11:04:46.433: INFO: Waiting for pod pod-4c758ce3-4350-11ea-a47a-0242ac110005 to disappear Jan 30 11:04:46.458: INFO: Pod pod-4c758ce3-4350-11ea-a47a-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:04:46.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-hrc88" for this suite. 
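The (root,0666,tmpfs) case above mounts an emptyDir volume backed by memory, i.e. a tmpfs, and then has the test container write and stat a file to verify ownership (root) and mode (0666). A minimal sketch of such a pod; the name, image, and verification command are illustrative, not taken from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Illustrative check; the e2e test image performs its own mode/ownership verification.
    args: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed; omit for a node-disk-backed emptyDir
```

Because the pod only needs to perform the check and exit, the test waits for phase "Succeeded" and then inspects the container log, matching the "success or failure" pattern seen throughout this run.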
Jan 30 11:04:52.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:04:52.627: INFO: namespace: e2e-tests-emptydir-hrc88, resource: bindings, ignored listing per whitelist Jan 30 11:04:52.865: INFO: namespace e2e-tests-emptydir-hrc88 deletion completed in 6.386512522s • [SLOW TEST:18.216 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:04:52.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 30 11:04:53.168: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Jan 30 11:04:53.180: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gx5q5/daemonsets","resourceVersion":"19959286"},"items":null} Jan 
30 11:04:53.184: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gx5q5/pods","resourceVersion":"19959286"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:04:53.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-gx5q5" for this suite. Jan 30 11:04:59.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:04:59.367: INFO: namespace: e2e-tests-daemonsets-gx5q5, resource: bindings, ignored listing per whitelist Jan 30 11:04:59.449: INFO: namespace e2e-tests-daemonsets-gx5q5 deletion completed in 6.250939416s S [SKIPPING] [6.582 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 30 11:04:53.168: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:04:59.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 30 11:04:59.596: INFO: Creating deployment "test-recreate-deployment" Jan 30 11:04:59.645: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 30 11:04:59.811: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Jan 30 11:05:01.857: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 30 11:05:01.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979099, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979099, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979100, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979099, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:05:03.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979099, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979099, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979100, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979099, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:05:06.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979099, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979099, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979100, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979099, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:05:07.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979099, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63715979099, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979100, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979099, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 11:05:09.884: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 30 11:05:09.903: INFO: Updating deployment test-recreate-deployment
Jan 30 11:05:09.904: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 30 11:05:10.620: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-j4cnb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-j4cnb/deployments/test-recreate-deployment,UID:5b4600e4-4350-11ea-a994-fa163e34d433,ResourceVersion:19959354,Generation:2,CreationTimestamp:2020-01-30 11:04:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name:
sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-30 11:05:10 +0000 UTC 2020-01-30 11:05:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-30 11:05:10 +0000 UTC 
2020-01-30 11:04:59 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jan 30 11:05:10.647: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-j4cnb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-j4cnb/replicasets/test-recreate-deployment-589c4bfd,UID:618d7e4f-4350-11ea-a994-fa163e34d433,ResourceVersion:19959351,Generation:1,CreationTimestamp:2020-01-30 11:05:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 5b4600e4-4350-11ea-a994-fa163e34d433 0xc001a730ef 0xc001a73100}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 30 11:05:10.647: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 30 11:05:10.648: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-j4cnb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-j4cnb/replicasets/test-recreate-deployment-5bf7f65dc,UID:5b683cae-4350-11ea-a994-fa163e34d433,ResourceVersion:19959341,Generation:2,CreationTimestamp:2020-01-30 11:04:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 5b4600e4-4350-11ea-a994-fa163e34d433 0xc001a73230 
0xc001a73231}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 30 11:05:10.670: INFO: Pod "test-recreate-deployment-589c4bfd-zn48h" 
is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-zn48h,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-j4cnb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-j4cnb/pods/test-recreate-deployment-589c4bfd-zn48h,UID:618ffba4-4350-11ea-a994-fa163e34d433,ResourceVersion:19959353,Generation:0,CreationTimestamp:2020-01-30 11:05:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 618d7e4f-4350-11ea-a994-fa163e34d433 0xc00011502f 0xc000115040}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6j7wk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j7wk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j7wk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000115280} {node.kubernetes.io/unreachable Exists NoExecute 0xc000115300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:05:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:05:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:05:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:05:10 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-30 11:05:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:05:10.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-j4cnb" for this suite. 
Jan 30 11:05:18.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:05:19.098: INFO: namespace: e2e-tests-deployment-j4cnb, resource: bindings, ignored listing per whitelist
Jan 30 11:05:19.104: INFO: namespace e2e-tests-deployment-j4cnb deletion completed in 8.405811106s
• [SLOW TEST:19.655 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:05:19.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-6p27g/secret-test-670d47b5-4350-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 11:05:19.451: INFO: Waiting up to 5m0s for pod "pod-configmaps-6717884c-4350-11ea-a47a-0242ac110005" in namespace "e2e-tests-secrets-6p27g" to be "success or failure"
Jan 30 11:05:19.471: INFO: Pod "pod-configmaps-6717884c-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.207422ms
Jan 30 11:05:21.489: INFO: Pod "pod-configmaps-6717884c-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037676739s
Jan 30 11:05:24.490: INFO: Pod "pod-configmaps-6717884c-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.037915558s
Jan 30 11:05:26.522: INFO: Pod "pod-configmaps-6717884c-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.069775443s
Jan 30 11:05:28.550: INFO: Pod "pod-configmaps-6717884c-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.098217183s
Jan 30 11:05:30.574: INFO: Pod "pod-configmaps-6717884c-4350-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.121740674s
STEP: Saw pod success
Jan 30 11:05:30.574: INFO: Pod "pod-configmaps-6717884c-4350-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:05:30.589: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6717884c-4350-11ea-a47a-0242ac110005 container env-test:
STEP: delete the pod
Jan 30 11:05:30.969: INFO: Waiting for pod pod-configmaps-6717884c-4350-11ea-a47a-0242ac110005 to disappear
Jan 30 11:05:30.982: INFO: Pod pod-configmaps-6717884c-4350-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:05:30.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6p27g" for this suite.
Jan 30 11:05:37.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:05:37.171: INFO: namespace: e2e-tests-secrets-6p27g, resource: bindings, ignored listing per whitelist
Jan 30 11:05:37.212: INFO: namespace e2e-tests-secrets-6p27g deletion completed in 6.215937692s
• [SLOW TEST:18.107 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:05:37.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 30 11:05:37.575: INFO: Number of nodes with available pods: 0
Jan 30 11:05:37.575: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:38.608: INFO: Number of nodes with available pods: 0
Jan 30 11:05:38.608: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:39.611: INFO: Number of nodes with available pods: 0
Jan 30 11:05:39.611: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:40.624: INFO: Number of nodes with available pods: 0
Jan 30 11:05:40.625: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:41.646: INFO: Number of nodes with available pods: 0
Jan 30 11:05:41.647: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:43.581: INFO: Number of nodes with available pods: 0
Jan 30 11:05:43.582: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:44.622: INFO: Number of nodes with available pods: 0
Jan 30 11:05:44.623: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:45.607: INFO: Number of nodes with available pods: 0
Jan 30 11:05:45.607: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:46.622: INFO: Number of nodes with available pods: 0
Jan 30 11:05:46.622: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:47.598: INFO: Number of nodes with available pods: 1
Jan 30 11:05:47.598: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 30 11:05:47.731: INFO: Number of nodes with available pods: 0
Jan 30 11:05:47.731: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:48.755: INFO: Number of nodes with available pods: 0
Jan 30 11:05:48.756: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:49.771: INFO: Number of nodes with available pods: 0
Jan 30 11:05:49.771: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:50.781: INFO: Number of nodes with available pods: 0
Jan 30 11:05:50.781: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:52.263: INFO: Number of nodes with available pods: 0
Jan 30 11:05:52.264: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:52.751: INFO: Number of nodes with available pods: 0
Jan 30 11:05:52.751: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:53.749: INFO: Number of nodes with available pods: 0
Jan 30 11:05:53.749: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:54.759: INFO: Number of nodes with available pods: 0
Jan 30 11:05:54.759: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:55.772: INFO: Number of nodes with available pods: 0
Jan 30 11:05:55.772: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:56.764: INFO: Number of nodes with available pods: 0
Jan 30 11:05:56.764: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:57.769: INFO: Number of nodes with available pods: 0
Jan 30 11:05:57.769: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:58.778: INFO: Number of nodes with available pods: 0
Jan 30 11:05:58.778: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:05:59.758: INFO: Number of nodes with available pods: 0
Jan 30 11:05:59.758: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:06:00.774: INFO: Number of nodes with available pods: 0
Jan 30 11:06:00.774: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:06:01.757: INFO: Number of nodes with available pods: 0
Jan 30 11:06:01.757: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:06:02.799: INFO: Number of nodes with available pods: 0
Jan 30 11:06:02.800: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:06:03.767: INFO: Number of nodes with available pods: 0
Jan 30 11:06:03.768: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:06:05.170: INFO: Number of nodes with available pods: 0
Jan 30 11:06:05.170: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:06:05.815: INFO: Number of nodes with available pods: 0
Jan 30 11:06:05.815: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:06:06.751: INFO: Number of nodes with available pods: 0
Jan 30 11:06:06.751: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:06:07.764: INFO: Number of nodes with available pods: 0
Jan 30 11:06:07.764: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:06:08.760: INFO: Number of nodes with available pods: 0
Jan 30 11:06:08.761: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:06:09.761: INFO: Number of nodes with available pods: 0
Jan 30 11:06:09.761: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:06:10.805: INFO: Number of nodes with available pods: 1
Jan 30 11:06:10.805: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-w9896, will wait for the garbage collector to delete the pods
Jan 30 11:06:11.010: INFO: Deleting DaemonSet.extensions daemon-set took: 116.374858ms
Jan 30 11:06:11.111: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.627429ms
Jan 30 11:06:22.732: INFO: Number of nodes with available pods: 0
Jan 30 11:06:22.732: INFO: Number of running nodes: 0, number of available pods: 0
Jan 30 11:06:22.743: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-w9896/daemonsets","resourceVersion":"19959528"},"items":null}
Jan 30 11:06:22.819: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-w9896/pods","resourceVersion":"19959528"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:06:22.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-w9896" for this suite.
Jan 30 11:06:28.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:06:29.036: INFO: namespace: e2e-tests-daemonsets-w9896, resource: bindings, ignored listing per whitelist
Jan 30 11:06:29.130: INFO: namespace e2e-tests-daemonsets-w9896 deletion completed in 6.287029624s
• [SLOW TEST:51.918 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:06:29.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-90c5186a-4350-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 11:06:29.419: INFO: Waiting up to 5m0s for pod "pod-configmaps-90c83c82-4350-11ea-a47a-0242ac110005" in namespace "e2e-tests-configmap-88s5n" to be "success or failure"
Jan 30 11:06:29.434: INFO: Pod "pod-configmaps-90c83c82-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.546217ms
Jan 30 11:06:31.891: INFO: Pod "pod-configmaps-90c83c82-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471837242s
Jan 30 11:06:33.911: INFO: Pod "pod-configmaps-90c83c82-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.491125186s
Jan 30 11:06:36.159: INFO: Pod "pod-configmaps-90c83c82-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.739500629s
Jan 30 11:06:38.342: INFO: Pod "pod-configmaps-90c83c82-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.922093561s
Jan 30 11:06:40.441: INFO: Pod "pod-configmaps-90c83c82-4350-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.021101836s
STEP: Saw pod success
Jan 30 11:06:40.441: INFO: Pod "pod-configmaps-90c83c82-4350-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:06:40.454: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-90c83c82-4350-11ea-a47a-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 30 11:06:40.598: INFO: Waiting for pod pod-configmaps-90c83c82-4350-11ea-a47a-0242ac110005 to disappear
Jan 30 11:06:40.611: INFO: Pod pod-configmaps-90c83c82-4350-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:06:40.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-88s5n" for this suite.
Jan 30 11:06:46.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:06:46.862: INFO: namespace: e2e-tests-configmap-88s5n, resource: bindings, ignored listing per whitelist Jan 30 11:06:46.884: INFO: namespace e2e-tests-configmap-88s5n deletion completed in 6.254991747s • [SLOW TEST:17.754 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:06:46.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 30 11:06:47.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Jan 30 11:06:47.204: INFO: stderr: "" Jan 30 11:06:47.204: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", 
GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Jan 30 11:06:47.210: INFO: Not supported for server versions before "1.13.12" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:06:47.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rrth7" for this suite. Jan 30 11:06:53.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:06:53.404: INFO: namespace: e2e-tests-kubectl-rrth7, resource: bindings, ignored listing per whitelist Jan 30 11:06:53.421: INFO: namespace e2e-tests-kubectl-rrth7 deletion completed in 6.185734631s S [SKIPPING] [6.537 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 30 11:06:47.210: Not supported for server versions before "1.13.12" /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:06:53.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-qbrd
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 11:06:53.915: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qbrd" in namespace "e2e-tests-subpath-qgnph" to be "success or failure"
Jan 30 11:06:54.005: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Pending", Reason="", readiness=false. Elapsed: 89.995044ms
Jan 30 11:06:56.198: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282640814s
Jan 30 11:06:58.215: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299594539s
Jan 30 11:07:00.302: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.386210151s
Jan 30 11:07:02.326: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.410142891s
Jan 30 11:07:04.352: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.436534218s
Jan 30 11:07:06.568: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.65280034s
Jan 30 11:07:08.586: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Running", Reason="", readiness=false. Elapsed: 14.671097304s
Jan 30 11:07:10.624: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Running", Reason="", readiness=false. Elapsed: 16.708791743s
Jan 30 11:07:12.635: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Running", Reason="", readiness=false. Elapsed: 18.71939292s
Jan 30 11:07:14.688: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Running", Reason="", readiness=false. Elapsed: 20.772309485s
Jan 30 11:07:16.708: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Running", Reason="", readiness=false. Elapsed: 22.792810161s
Jan 30 11:07:18.788: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Running", Reason="", readiness=false. Elapsed: 24.872256466s
Jan 30 11:07:20.813: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Running", Reason="", readiness=false. Elapsed: 26.897137703s
Jan 30 11:07:22.838: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Running", Reason="", readiness=false. Elapsed: 28.922511948s
Jan 30 11:07:24.860: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Running", Reason="", readiness=false. Elapsed: 30.944711013s
Jan 30 11:07:26.875: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Running", Reason="", readiness=false. Elapsed: 32.959732467s
Jan 30 11:07:28.890: INFO: Pod "pod-subpath-test-configmap-qbrd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.974747555s
STEP: Saw pod success
Jan 30 11:07:28.890: INFO: Pod "pod-subpath-test-configmap-qbrd" satisfied condition "success or failure"
Jan 30 11:07:28.896: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-qbrd container test-container-subpath-configmap-qbrd:
STEP: delete the pod
Jan 30 11:07:29.153: INFO: Waiting for pod pod-subpath-test-configmap-qbrd to disappear
Jan 30 11:07:29.164: INFO: Pod pod-subpath-test-configmap-qbrd no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qbrd
Jan 30 11:07:29.165: INFO: Deleting pod "pod-subpath-test-configmap-qbrd" in namespace "e2e-tests-subpath-qgnph"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:07:29.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-qgnph" for this suite.
Jan 30 11:07:37.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:07:37.455: INFO: namespace: e2e-tests-subpath-qgnph, resource: bindings, ignored listing per whitelist
Jan 30 11:07:37.518: INFO: namespace e2e-tests-subpath-qgnph deletion completed in 8.329439953s
• [SLOW TEST:44.097 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Deployment
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:07:37.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 11:07:37.924: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 30 11:07:43.853: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 30 11:07:47.892: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 30 11:07:49.905: INFO: Creating deployment "test-rollover-deployment"
Jan 30 11:07:49.935: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 30 11:07:52.016: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 30 11:07:52.037: INFO: Ensure that both replica sets have 1 created replica
Jan 30 11:07:52.480: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 30 11:07:53.399: INFO: Updating deployment test-rollover-deployment
Jan 30 11:07:53.399: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 30 11:07:55.539: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 30 11:07:55.878: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 30 11:07:55.912: INFO: all replica sets need to contain the pod-template-hash label
Jan 30 11:07:55.913: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1,
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979273, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:07:57.951: INFO: all replica sets need to contain the pod-template-hash label Jan 30 11:07:57.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979273, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:08:00.672: INFO: all replica sets need to contain the pod-template-hash label Jan 30 11:08:00.673: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979273, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:08:02.058: INFO: all replica sets need to contain the pod-template-hash label Jan 30 11:08:02.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979273, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:08:03.934: INFO: all 
replica sets need to contain the pod-template-hash label Jan 30 11:08:03.934: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979283, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:08:05.936: INFO: all replica sets need to contain the pod-template-hash label Jan 30 11:08:05.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979283, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:08:07.938: INFO: all replica sets need to contain the pod-template-hash label Jan 30 11:08:07.939: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979283, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:08:09.939: INFO: all replica sets need to contain the pod-template-hash label Jan 30 11:08:09.940: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979283, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:08:11.958: INFO: all replica sets need to contain the pod-template-hash label Jan 30 11:08:11.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979283, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:08:14.050: INFO: Jan 30 11:08:14.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979293, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715979270, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 11:08:16.067: INFO: Jan 30 11:08:16.067: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 30 11:08:16.090: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-d6vk9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d6vk9/deployments/test-rollover-deployment,UID:c0c962c5-4350-11ea-a994-fa163e34d433,ResourceVersion:19959823,Generation:2,CreationTimestamp:2020-01-30 11:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-30 11:07:50 +0000 UTC 2020-01-30 11:07:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-30 11:08:14 +0000 UTC 2020-01-30 11:07:50 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 30 11:08:16.107: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-d6vk9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d6vk9/replicasets/test-rollover-deployment-5b8479fdb6,UID:c2db9e33-4350-11ea-a994-fa163e34d433,ResourceVersion:19959814,Generation:2,CreationTimestamp:2020-01-30 11:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c0c962c5-4350-11ea-a994-fa163e34d433 0xc001ab5eb7 0xc001ab5eb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 30 11:08:16.107: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 30 11:08:16.108: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-d6vk9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d6vk9/replicasets/test-rollover-controller,UID:b99c06d9-4350-11ea-a994-fa163e34d433,ResourceVersion:19959822,Generation:2,CreationTimestamp:2020-01-30 11:07:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c0c962c5-4350-11ea-a994-fa163e34d433 0xc001ab5cf7 0xc001ab5cf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 30 11:08:16.109: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-d6vk9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d6vk9/replicasets/test-rollover-deployment-58494b7559,UID:c0de39a6-4350-11ea-a994-fa163e34d433,ResourceVersion:19959778,Generation:2,CreationTimestamp:2020-01-30 11:07:50 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c0c962c5-4350-11ea-a994-fa163e34d433 0xc001ab5dd7 0xc001ab5dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 30 11:08:16.130: INFO: Pod "test-rollover-deployment-5b8479fdb6-tbpld" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-tbpld,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-d6vk9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d6vk9/pods/test-rollover-deployment-5b8479fdb6-tbpld,UID:c300179d-4350-11ea-a994-fa163e34d433,ResourceVersion:19959799,Generation:0,CreationTimestamp:2020-01-30 11:07:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 c2db9e33-4350-11ea-a994-fa163e34d433 0xc0023aea77 0xc0023aea78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cxk97 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cxk97,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-cxk97 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023aeae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023aeb00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:07:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:08:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:08:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:07:53 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-30 11:07:53 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-30 11:08:03 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://150fafd1b489a392683a39d0ee6375991ea09b665bb28208d620132adfa6ecc0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:08:16.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-d6vk9" for this suite.
Jan 30 11:08:24.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:08:24.267: INFO: namespace: e2e-tests-deployment-d6vk9, resource: bindings, ignored listing per whitelist
Jan 30 11:08:24.345: INFO: namespace e2e-tests-deployment-d6vk9 deletion completed in 8.174594475s
• [SLOW TEST:46.826 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:08:24.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 30 11:08:25.243: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 30 11:08:30.294: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:08:31.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-fsm7x" for this suite.
Jan 30 11:08:40.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:08:41.050: INFO: namespace: e2e-tests-replication-controller-fsm7x, resource: bindings, ignored listing per whitelist
Jan 30 11:08:41.064: INFO: namespace e2e-tests-replication-controller-fsm7x deletion completed in 9.437658574s
• [SLOW TEST:16.719 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:08:41.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 30 11:08:41.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-dtqxr'
Jan 30 11:08:41.637: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 30 11:08:41.638: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan 30 11:08:45.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-dtqxr'
Jan 30 11:08:45.995: INFO: stderr: ""
Jan 30 11:08:45.995: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:08:45.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dtqxr" for this suite.
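The deprecation warning captured in the stderr above points at two replacements for `kubectl run --generator=deployment/v1beta1`. A minimal sketch of those replacements, reusing the deployment name, image, and namespace from this run; a reachable cluster is assumed, so these commands are illustrative rather than independently runnable:

```shell
# Instead of: kubectl run ... --generator=deployment/v1beta1
# create the Deployment directly (available in the v1.13 kubectl used here):
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-dtqxr

# Or, if a bare Pod rather than a Deployment is wanted, the generator
# the warning recommends:
kubectl run e2e-test-nginx-pod \
  --image=docker.io/library/nginx:1.14-alpine \
  --generator=run-pod/v1 \
  --namespace=e2e-tests-kubectl-dtqxr
```

Note that `kubectl create deployment` produces an `apps/v1` Deployment, whereas the deprecated generator above produced `deployment.extensions`, so label selectors and API paths differ slightly.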
Jan 30 11:09:10.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:09:10.244: INFO: namespace: e2e-tests-kubectl-dtqxr, resource: bindings, ignored listing per whitelist
Jan 30 11:09:10.324: INFO: namespace e2e-tests-kubectl-dtqxr deletion completed in 24.302620197s
• [SLOW TEST:29.260 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:09:10.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 11:09:10.743: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0f3e1cc-4350-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-795n7" to be "success or failure"
Jan 30 11:09:10.748: INFO: Pod "downwardapi-volume-f0f3e1cc-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.364027ms
Jan 30 11:09:12.758: INFO: Pod "downwardapi-volume-f0f3e1cc-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014748072s
Jan 30 11:09:14.779: INFO: Pod "downwardapi-volume-f0f3e1cc-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036026961s
Jan 30 11:09:17.198: INFO: Pod "downwardapi-volume-f0f3e1cc-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.454532754s
Jan 30 11:09:19.208: INFO: Pod "downwardapi-volume-f0f3e1cc-4350-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.464558234s
Jan 30 11:09:21.216: INFO: Pod "downwardapi-volume-f0f3e1cc-4350-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.473140502s
STEP: Saw pod success
Jan 30 11:09:21.216: INFO: Pod "downwardapi-volume-f0f3e1cc-4350-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:09:21.220: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f0f3e1cc-4350-11ea-a47a-0242ac110005 container client-container:
STEP: delete the pod
Jan 30 11:09:21.387: INFO: Waiting for pod downwardapi-volume-f0f3e1cc-4350-11ea-a47a-0242ac110005 to disappear
Jan 30 11:09:22.058: INFO: Pod downwardapi-volume-f0f3e1cc-4350-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:09:22.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-795n7" for this suite.
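The "Waiting up to 5m0s for pod ... to be \"success or failure\"" entries above come from the framework polling the pod's phase every ~2 seconds until it reaches a terminal state. A hypothetical shell equivalent of that loop (the function name and 2s interval are this sketch's own; the namespace and pod name in the usage comment are taken from the log, and real use assumes kubectl with a reachable cluster):

```shell
#!/bin/sh
# Poll a pod's .status.phase until it is terminal (Succeeded or Failed)
# or the timeout expires, mirroring the framework's wait loop above.
wait_for_terminal_phase() {
    # usage: wait_for_terminal_phase NAMESPACE POD TIMEOUT_SECONDS
    ns=$1; pod=$2; timeout=$3; elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        phase=$(kubectl get pod "$pod" --namespace "$ns" \
            -o jsonpath='{.status.phase}' 2>/dev/null)
        case "$phase" in
            Succeeded|Failed) printf '%s\n' "$phase"; return 0 ;;
        esac
        sleep 2
        elapsed=$((elapsed + 2))
    done
    printf 'timeout\n'
    return 1
}

# e.g. wait_for_terminal_phase e2e-tests-downward-api-795n7 \
#        downwardapi-volume-f0f3e1cc-4350-11ea-a47a-0242ac110005 300
```

The framework additionally treats `Failed` as satisfying its "success or failure" condition and asserts on the phase afterwards, which is why the sketch returns the phase rather than a bare success/failure code.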
Jan 30 11:09:28.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:09:28.606: INFO: namespace: e2e-tests-downward-api-795n7, resource: bindings, ignored listing per whitelist
Jan 30 11:09:28.734: INFO: namespace e2e-tests-downward-api-795n7 deletion completed in 6.656178283s
• [SLOW TEST:18.410 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:09:28.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 30 11:09:29.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-sfqkq'
Jan 30 11:09:29.286: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 30 11:09:29.286: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 30 11:09:29.321: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-v99x4]
Jan 30 11:09:29.322: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-v99x4" in namespace "e2e-tests-kubectl-sfqkq" to be "running and ready"
Jan 30 11:09:29.538: INFO: Pod "e2e-test-nginx-rc-v99x4": Phase="Pending", Reason="", readiness=false. Elapsed: 216.390851ms
Jan 30 11:09:31.571: INFO: Pod "e2e-test-nginx-rc-v99x4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249342529s
Jan 30 11:09:33.597: INFO: Pod "e2e-test-nginx-rc-v99x4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27552064s
Jan 30 11:09:35.636: INFO: Pod "e2e-test-nginx-rc-v99x4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.314758686s
Jan 30 11:09:37.651: INFO: Pod "e2e-test-nginx-rc-v99x4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.328906267s
Jan 30 11:09:39.675: INFO: Pod "e2e-test-nginx-rc-v99x4": Phase="Running", Reason="", readiness=true. Elapsed: 10.353035706s
Jan 30 11:09:39.675: INFO: Pod "e2e-test-nginx-rc-v99x4" satisfied condition "running and ready"
Jan 30 11:09:39.675: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-v99x4]
Jan 30 11:09:39.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-sfqkq'
Jan 30 11:09:39.945: INFO: stderr: ""
Jan 30 11:09:39.945: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan 30 11:09:39.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-sfqkq'
Jan 30 11:09:40.106: INFO: stderr: ""
Jan 30 11:09:40.106: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:09:40.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sfqkq" for this suite.
Jan 30 11:10:04.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:10:04.571: INFO: namespace: e2e-tests-kubectl-sfqkq, resource: bindings, ignored listing per whitelist
Jan 30 11:10:04.613: INFO: namespace e2e-tests-kubectl-sfqkq deletion completed in 24.497661096s
• [SLOW TEST:35.878 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:10:04.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-11354f09-4351-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 11:10:04.860: INFO: Waiting up to 5m0s for pod "pod-secrets-11364c44-4351-11ea-a47a-0242ac110005" in namespace "e2e-tests-secrets-rjkq7" to be "success or failure"
Jan 30 11:10:04.878: INFO: Pod "pod-secrets-11364c44-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.55444ms
Jan 30 11:10:06.949: INFO: Pod "pod-secrets-11364c44-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08962222s
Jan 30 11:10:08.967: INFO: Pod "pod-secrets-11364c44-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107434673s
Jan 30 11:10:11.290: INFO: Pod "pod-secrets-11364c44-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430555692s
Jan 30 11:10:13.322: INFO: Pod "pod-secrets-11364c44-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.461956211s
Jan 30 11:10:15.349: INFO: Pod "pod-secrets-11364c44-4351-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.4891254s
STEP: Saw pod success
Jan 30 11:10:15.349: INFO: Pod "pod-secrets-11364c44-4351-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:10:15.371: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-11364c44-4351-11ea-a47a-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 30 11:10:15.536: INFO: Waiting for pod pod-secrets-11364c44-4351-11ea-a47a-0242ac110005 to disappear
Jan 30 11:10:15.544: INFO: Pod pod-secrets-11364c44-4351-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:10:15.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rjkq7" for this suite.
Jan 30 11:10:21.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:10:21.718: INFO: namespace: e2e-tests-secrets-rjkq7, resource: bindings, ignored listing per whitelist
Jan 30 11:10:21.880: INFO: namespace e2e-tests-secrets-rjkq7 deletion completed in 6.328595881s
• [SLOW TEST:17.266 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:10:21.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-1bc3935b-4351-11ea-a47a-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-1bc395cc-4351-11ea-a47a-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1bc3935b-4351-11ea-a47a-0242ac110005
STEP: Updating configmap cm-test-opt-upd-1bc395cc-4351-11ea-a47a-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-1bc39647-4351-11ea-a47a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:10:41.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-p4sjx" for this suite.
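The STEP entries above outline the sequence the test drives through the API: create two optional configMaps, mount them, delete one, update the other, then create a third and wait for the volume to catch up. A hypothetical kubectl transcript of those same steps (the shortened names and literal keys are this sketch's own; the namespace is the generated one from this run, and a live cluster is assumed):

```shell
ns=e2e-tests-configmap-p4sjx

# Create the two configMaps the pod mounts as optional volumes.
kubectl create configmap cm-test-opt-del --from-literal=data-1=value-1 -n "$ns"
kubectl create configmap cm-test-opt-upd --from-literal=data-1=value-1 -n "$ns"

# ... pod referencing both (plus a not-yet-existing cm-test-opt-create)
#     is created here ...

# Delete one, update the other, and create the third; the kubelet's
# periodic sync then rewrites the projected volume contents.
kubectl delete configmap cm-test-opt-del -n "$ns"
kubectl create configmap cm-test-opt-upd --from-literal=data-3=value-3 \
  -n "$ns" --dry-run -o yaml | kubectl apply -f -
kubectl create configmap cm-test-opt-create --from-literal=data-1=value-1 -n "$ns"
```

Because the volumes are marked optional, the deletion empties that mount rather than failing the pod, which is the behavior the ~20-second "waiting to observe update in volume" step is verifying.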
Jan 30 11:11:05.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:11:05.366: INFO: namespace: e2e-tests-configmap-p4sjx, resource: bindings, ignored listing per whitelist
Jan 30 11:11:05.369: INFO: namespace e2e-tests-configmap-p4sjx deletion completed in 24.303725842s
• [SLOW TEST:43.488 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:11:05.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-3563a187-4351-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 11:11:05.613: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-356c7ca2-4351-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-252dh" to be "success or failure"
Jan 30 11:11:05.844: INFO: Pod "pod-projected-secrets-356c7ca2-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 231.047768ms
Jan 30 11:11:07.916: INFO: Pod "pod-projected-secrets-356c7ca2-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302651639s
Jan 30 11:11:09.945: INFO: Pod "pod-projected-secrets-356c7ca2-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331222119s
Jan 30 11:11:11.967: INFO: Pod "pod-projected-secrets-356c7ca2-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.353709982s
Jan 30 11:11:14.353: INFO: Pod "pod-projected-secrets-356c7ca2-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.739216275s
Jan 30 11:11:16.369: INFO: Pod "pod-projected-secrets-356c7ca2-4351-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.755331816s
STEP: Saw pod success
Jan 30 11:11:16.369: INFO: Pod "pod-projected-secrets-356c7ca2-4351-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:11:16.375: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-356c7ca2-4351-11ea-a47a-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Jan 30 11:11:17.126: INFO: Waiting for pod pod-projected-secrets-356c7ca2-4351-11ea-a47a-0242ac110005 to disappear
Jan 30 11:11:17.725: INFO: Pod pod-projected-secrets-356c7ca2-4351-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:11:17.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-252dh" for this suite.
Jan 30 11:11:24.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:11:24.220: INFO: namespace: e2e-tests-projected-252dh, resource: bindings, ignored listing per whitelist
Jan 30 11:11:24.250: INFO: namespace e2e-tests-projected-252dh deletion completed in 6.49314761s
• [SLOW TEST:18.881 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:11:24.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-40af3225-4351-11ea-a47a-0242ac110005
STEP: Creating secret with name s-test-opt-upd-40af3459-4351-11ea-a47a-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-40af3225-4351-11ea-a47a-0242ac110005
STEP: Updating secret s-test-opt-upd-40af3459-4351-11ea-a47a-0242ac110005
STEP: Creating secret with name s-test-opt-create-40af34da-4351-11ea-a47a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:12:47.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wbqd2" for this suite.
Jan 30 11:13:13.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:13:13.435: INFO: namespace: e2e-tests-secrets-wbqd2, resource: bindings, ignored listing per whitelist
Jan 30 11:13:13.600: INFO: namespace e2e-tests-secrets-wbqd2 deletion completed in 26.337500562s
• [SLOW TEST:109.350 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:13:13.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 11:13:14.081: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81faac49-4351-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-vg5s6" to be "success or failure"
Jan 30 11:13:14.100: INFO: Pod "downwardapi-volume-81faac49-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.894888ms
Jan 30 11:13:16.447: INFO: Pod "downwardapi-volume-81faac49-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.365691959s
Jan 30 11:13:18.478: INFO: Pod "downwardapi-volume-81faac49-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.39610603s
Jan 30 11:13:20.559: INFO: Pod "downwardapi-volume-81faac49-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4778234s
Jan 30 11:13:22.586: INFO: Pod "downwardapi-volume-81faac49-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.504232336s
Jan 30 11:13:24.655: INFO: Pod "downwardapi-volume-81faac49-4351-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.573444868s
STEP: Saw pod success
Jan 30 11:13:24.655: INFO: Pod "downwardapi-volume-81faac49-4351-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:13:24.682: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-81faac49-4351-11ea-a47a-0242ac110005 container client-container:
STEP: delete the pod
Jan 30 11:13:24.922: INFO: Waiting for pod downwardapi-volume-81faac49-4351-11ea-a47a-0242ac110005 to disappear
Jan 30 11:13:24.935: INFO: Pod downwardapi-volume-81faac49-4351-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:13:24.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vg5s6" for this suite.
Jan 30 11:13:31.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:13:31.269: INFO: namespace: e2e-tests-projected-vg5s6, resource: bindings, ignored listing per whitelist
Jan 30 11:13:31.291: INFO: namespace e2e-tests-projected-vg5s6 deletion completed in 6.345478739s
• [SLOW TEST:17.690 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:13:31.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-8c5f061f-4351-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 11:13:31.495: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c5fbd45-4351-11ea-a47a-0242ac110005" in namespace "e2e-tests-configmap-728xs" to be "success or failure"
Jan 30 11:13:31.507: INFO: Pod "pod-configmaps-8c5fbd45-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.24144ms
Jan 30 11:13:33.524: INFO: Pod "pod-configmaps-8c5fbd45-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028856928s
Jan 30 11:13:35.542: INFO: Pod "pod-configmaps-8c5fbd45-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046782709s
Jan 30 11:13:37.569: INFO: Pod "pod-configmaps-8c5fbd45-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074507743s
Jan 30 11:13:39.963: INFO: Pod "pod-configmaps-8c5fbd45-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.467650372s
Jan 30 11:13:41.977: INFO: Pod "pod-configmaps-8c5fbd45-4351-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.482230251s
STEP: Saw pod success
Jan 30 11:13:41.977: INFO: Pod "pod-configmaps-8c5fbd45-4351-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:13:41.981: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8c5fbd45-4351-11ea-a47a-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 30 11:13:42.124: INFO: Waiting for pod pod-configmaps-8c5fbd45-4351-11ea-a47a-0242ac110005 to disappear
Jan 30 11:13:42.352: INFO: Pod pod-configmaps-8c5fbd45-4351-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:13:42.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-728xs" for this suite.
Jan 30 11:13:48.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:13:48.605: INFO: namespace: e2e-tests-configmap-728xs, resource: bindings, ignored listing per whitelist
Jan 30 11:13:48.679: INFO: namespace e2e-tests-configmap-728xs deletion completed in 6.305765374s
• [SLOW TEST:17.388 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:13:48.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 11:13:48.903: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96b7a502-4351-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-78d2p" to be "success or failure"
Jan 30 11:13:48.929: INFO: Pod "downwardapi-volume-96b7a502-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.395253ms
Jan 30 11:13:50.946: INFO: Pod "downwardapi-volume-96b7a502-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042569781s
Jan 30 11:13:52.966: INFO: Pod "downwardapi-volume-96b7a502-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06222338s
Jan 30 11:13:55.603: INFO: Pod "downwardapi-volume-96b7a502-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.699856221s
Jan 30 11:13:57.628: INFO: Pod "downwardapi-volume-96b7a502-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724135876s
Jan 30 11:13:59.643: INFO: Pod "downwardapi-volume-96b7a502-4351-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.73970177s
STEP: Saw pod success
Jan 30 11:13:59.643: INFO: Pod "downwardapi-volume-96b7a502-4351-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:13:59.647: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-96b7a502-4351-11ea-a47a-0242ac110005 container client-container:
STEP: delete the pod
Jan 30 11:14:00.800: INFO: Waiting for pod downwardapi-volume-96b7a502-4351-11ea-a47a-0242ac110005 to disappear
Jan 30 11:14:01.409: INFO: Pod downwardapi-volume-96b7a502-4351-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:14:01.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-78d2p" for this suite.
Jan 30 11:14:07.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:14:07.819: INFO: namespace: e2e-tests-downward-api-78d2p, resource: bindings, ignored listing per whitelist
Jan 30 11:14:07.893: INFO: namespace e2e-tests-downward-api-78d2p deletion completed in 6.458875162s
• [SLOW TEST:19.213 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating
a kubernetes client Jan 30 11:14:07.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 30 11:14:34.506: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5xxfk PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 11:14:34.506: INFO: >>> kubeConfig: /root/.kube/config I0130 11:14:34.654946 8 log.go:172] (0xc000ac8370) (0xc002216280) Create stream I0130 11:14:34.655164 8 log.go:172] (0xc000ac8370) (0xc002216280) Stream added, broadcasting: 1 I0130 11:14:34.663068 8 log.go:172] (0xc000ac8370) Reply frame received for 1 I0130 11:14:34.663127 8 log.go:172] (0xc000ac8370) (0xc002216320) Create stream I0130 11:14:34.663141 8 log.go:172] (0xc000ac8370) (0xc002216320) Stream added, broadcasting: 3 I0130 11:14:34.664667 8 log.go:172] (0xc000ac8370) Reply frame received for 3 I0130 11:14:34.664693 8 log.go:172] (0xc000ac8370) (0xc000f4e140) Create stream I0130 11:14:34.664709 8 log.go:172] (0xc000ac8370) (0xc000f4e140) Stream added, broadcasting: 5 I0130 11:14:34.666749 8 log.go:172] (0xc000ac8370) Reply frame received for 5 I0130 11:14:34.881519 8 log.go:172] (0xc000ac8370) Data frame received for 3 I0130 11:14:34.881729 8 log.go:172] (0xc002216320) (3) Data frame handling I0130 11:14:34.881788 8 log.go:172] (0xc002216320) (3) Data frame sent I0130 11:14:35.138514 8 log.go:172] (0xc000ac8370) Data frame received for 1 I0130 11:14:35.138685 8 
log.go:172] (0xc000ac8370) (0xc000f4e140) Stream removed, broadcasting: 5 I0130 11:14:35.138824 8 log.go:172] (0xc002216280) (1) Data frame handling I0130 11:14:35.138878 8 log.go:172] (0xc002216280) (1) Data frame sent I0130 11:14:35.138960 8 log.go:172] (0xc000ac8370) (0xc002216320) Stream removed, broadcasting: 3 I0130 11:14:35.139072 8 log.go:172] (0xc000ac8370) (0xc002216280) Stream removed, broadcasting: 1 I0130 11:14:35.139127 8 log.go:172] (0xc000ac8370) Go away received I0130 11:14:35.139617 8 log.go:172] (0xc000ac8370) (0xc002216280) Stream removed, broadcasting: 1 I0130 11:14:35.139638 8 log.go:172] (0xc000ac8370) (0xc002216320) Stream removed, broadcasting: 3 I0130 11:14:35.139651 8 log.go:172] (0xc000ac8370) (0xc000f4e140) Stream removed, broadcasting: 5 Jan 30 11:14:35.139: INFO: Exec stderr: "" Jan 30 11:14:35.139: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5xxfk PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 11:14:35.140: INFO: >>> kubeConfig: /root/.kube/config I0130 11:14:35.231198 8 log.go:172] (0xc000ac8840) (0xc0022165a0) Create stream I0130 11:14:35.231364 8 log.go:172] (0xc000ac8840) (0xc0022165a0) Stream added, broadcasting: 1 I0130 11:14:35.236642 8 log.go:172] (0xc000ac8840) Reply frame received for 1 I0130 11:14:35.236690 8 log.go:172] (0xc000ac8840) (0xc000f4e1e0) Create stream I0130 11:14:35.236702 8 log.go:172] (0xc000ac8840) (0xc000f4e1e0) Stream added, broadcasting: 3 I0130 11:14:35.239397 8 log.go:172] (0xc000ac8840) Reply frame received for 3 I0130 11:14:35.239457 8 log.go:172] (0xc000ac8840) (0xc001d08000) Create stream I0130 11:14:35.239473 8 log.go:172] (0xc000ac8840) (0xc001d08000) Stream added, broadcasting: 5 I0130 11:14:35.240707 8 log.go:172] (0xc000ac8840) Reply frame received for 5 I0130 11:14:35.370318 8 log.go:172] (0xc000ac8840) Data frame received for 3 I0130 11:14:35.370513 8 log.go:172] 
(0xc000f4e1e0) (3) Data frame handling I0130 11:14:35.370599 8 log.go:172] (0xc000f4e1e0) (3) Data frame sent I0130 11:14:35.517067 8 log.go:172] (0xc000ac8840) Data frame received for 1 I0130 11:14:35.517215 8 log.go:172] (0xc000ac8840) (0xc001d08000) Stream removed, broadcasting: 5 I0130 11:14:35.517254 8 log.go:172] (0xc0022165a0) (1) Data frame handling I0130 11:14:35.517299 8 log.go:172] (0xc0022165a0) (1) Data frame sent I0130 11:14:35.517360 8 log.go:172] (0xc000ac8840) (0xc000f4e1e0) Stream removed, broadcasting: 3 I0130 11:14:35.517423 8 log.go:172] (0xc000ac8840) (0xc0022165a0) Stream removed, broadcasting: 1 I0130 11:14:35.517446 8 log.go:172] (0xc000ac8840) Go away received I0130 11:14:35.517673 8 log.go:172] (0xc000ac8840) (0xc0022165a0) Stream removed, broadcasting: 1 I0130 11:14:35.517688 8 log.go:172] (0xc000ac8840) (0xc000f4e1e0) Stream removed, broadcasting: 3 I0130 11:14:35.517694 8 log.go:172] (0xc000ac8840) (0xc001d08000) Stream removed, broadcasting: 5 Jan 30 11:14:35.517: INFO: Exec stderr: "" Jan 30 11:14:35.517: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5xxfk PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 11:14:35.517: INFO: >>> kubeConfig: /root/.kube/config I0130 11:14:35.611757 8 log.go:172] (0xc000ac8d10) (0xc002216820) Create stream I0130 11:14:35.612374 8 log.go:172] (0xc000ac8d10) (0xc002216820) Stream added, broadcasting: 1 I0130 11:14:35.622301 8 log.go:172] (0xc000ac8d10) Reply frame received for 1 I0130 11:14:35.622363 8 log.go:172] (0xc000ac8d10) (0xc001e380a0) Create stream I0130 11:14:35.622379 8 log.go:172] (0xc000ac8d10) (0xc001e380a0) Stream added, broadcasting: 3 I0130 11:14:35.624015 8 log.go:172] (0xc000ac8d10) Reply frame received for 3 I0130 11:14:35.624050 8 log.go:172] (0xc000ac8d10) (0xc001e38140) Create stream I0130 11:14:35.624064 8 log.go:172] (0xc000ac8d10) (0xc001e38140) Stream added, 
broadcasting: 5 I0130 11:14:35.627722 8 log.go:172] (0xc000ac8d10) Reply frame received for 5 I0130 11:14:35.733337 8 log.go:172] (0xc000ac8d10) Data frame received for 3 I0130 11:14:35.733478 8 log.go:172] (0xc001e380a0) (3) Data frame handling I0130 11:14:35.733513 8 log.go:172] (0xc001e380a0) (3) Data frame sent I0130 11:14:35.913124 8 log.go:172] (0xc000ac8d10) (0xc001e380a0) Stream removed, broadcasting: 3 I0130 11:14:35.913298 8 log.go:172] (0xc000ac8d10) Data frame received for 1 I0130 11:14:35.913314 8 log.go:172] (0xc002216820) (1) Data frame handling I0130 11:14:35.913339 8 log.go:172] (0xc002216820) (1) Data frame sent I0130 11:14:35.913387 8 log.go:172] (0xc000ac8d10) (0xc002216820) Stream removed, broadcasting: 1 I0130 11:14:35.913713 8 log.go:172] (0xc000ac8d10) (0xc001e38140) Stream removed, broadcasting: 5 I0130 11:14:35.913743 8 log.go:172] (0xc000ac8d10) Go away received I0130 11:14:35.914158 8 log.go:172] (0xc000ac8d10) (0xc002216820) Stream removed, broadcasting: 1 I0130 11:14:35.914177 8 log.go:172] (0xc000ac8d10) (0xc001e380a0) Stream removed, broadcasting: 3 I0130 11:14:35.914189 8 log.go:172] (0xc000ac8d10) (0xc001e38140) Stream removed, broadcasting: 5 Jan 30 11:14:35.914: INFO: Exec stderr: "" Jan 30 11:14:35.914: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5xxfk PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 11:14:35.914: INFO: >>> kubeConfig: /root/.kube/config I0130 11:14:36.037090 8 log.go:172] (0xc000ac91e0) (0xc002216a00) Create stream I0130 11:14:36.037302 8 log.go:172] (0xc000ac91e0) (0xc002216a00) Stream added, broadcasting: 1 I0130 11:14:36.044062 8 log.go:172] (0xc000ac91e0) Reply frame received for 1 I0130 11:14:36.044095 8 log.go:172] (0xc000ac91e0) (0xc000f4e280) Create stream I0130 11:14:36.044104 8 log.go:172] (0xc000ac91e0) (0xc000f4e280) Stream added, broadcasting: 3 I0130 11:14:36.045673 8 
log.go:172] (0xc000ac91e0) Reply frame received for 3 I0130 11:14:36.045711 8 log.go:172] (0xc000ac91e0) (0xc0023a81e0) Create stream I0130 11:14:36.045722 8 log.go:172] (0xc000ac91e0) (0xc0023a81e0) Stream added, broadcasting: 5 I0130 11:14:36.046973 8 log.go:172] (0xc000ac91e0) Reply frame received for 5 I0130 11:14:36.231966 8 log.go:172] (0xc000ac91e0) Data frame received for 3 I0130 11:14:36.232025 8 log.go:172] (0xc000f4e280) (3) Data frame handling I0130 11:14:36.232045 8 log.go:172] (0xc000f4e280) (3) Data frame sent I0130 11:14:36.347991 8 log.go:172] (0xc000ac91e0) Data frame received for 1 I0130 11:14:36.348054 8 log.go:172] (0xc000ac91e0) (0xc0023a81e0) Stream removed, broadcasting: 5 I0130 11:14:36.348101 8 log.go:172] (0xc002216a00) (1) Data frame handling I0130 11:14:36.348131 8 log.go:172] (0xc002216a00) (1) Data frame sent I0130 11:14:36.348188 8 log.go:172] (0xc000ac91e0) (0xc000f4e280) Stream removed, broadcasting: 3 I0130 11:14:36.348226 8 log.go:172] (0xc000ac91e0) (0xc002216a00) Stream removed, broadcasting: 1 I0130 11:14:36.348309 8 log.go:172] (0xc000ac91e0) Go away received I0130 11:14:36.348406 8 log.go:172] (0xc000ac91e0) (0xc002216a00) Stream removed, broadcasting: 1 I0130 11:14:36.348414 8 log.go:172] (0xc000ac91e0) (0xc000f4e280) Stream removed, broadcasting: 3 I0130 11:14:36.348422 8 log.go:172] (0xc000ac91e0) (0xc0023a81e0) Stream removed, broadcasting: 5 Jan 30 11:14:36.348: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 30 11:14:36.348: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5xxfk PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 11:14:36.348: INFO: >>> kubeConfig: /root/.kube/config I0130 11:14:36.420363 8 log.go:172] (0xc000ac96b0) (0xc002216d20) Create stream I0130 11:14:36.420647 8 log.go:172] (0xc000ac96b0) (0xc002216d20) 
Stream added, broadcasting: 1 I0130 11:14:36.429262 8 log.go:172] (0xc000ac96b0) Reply frame received for 1 I0130 11:14:36.429310 8 log.go:172] (0xc000ac96b0) (0xc0023a8280) Create stream I0130 11:14:36.429325 8 log.go:172] (0xc000ac96b0) (0xc0023a8280) Stream added, broadcasting: 3 I0130 11:14:36.430804 8 log.go:172] (0xc000ac96b0) Reply frame received for 3 I0130 11:14:36.430862 8 log.go:172] (0xc000ac96b0) (0xc000f4e3c0) Create stream I0130 11:14:36.430869 8 log.go:172] (0xc000ac96b0) (0xc000f4e3c0) Stream added, broadcasting: 5 I0130 11:14:36.431978 8 log.go:172] (0xc000ac96b0) Reply frame received for 5 I0130 11:14:36.621349 8 log.go:172] (0xc000ac96b0) Data frame received for 3 I0130 11:14:36.621483 8 log.go:172] (0xc0023a8280) (3) Data frame handling I0130 11:14:36.621505 8 log.go:172] (0xc0023a8280) (3) Data frame sent I0130 11:14:36.720863 8 log.go:172] (0xc000ac96b0) Data frame received for 1 I0130 11:14:36.720962 8 log.go:172] (0xc000ac96b0) (0xc0023a8280) Stream removed, broadcasting: 3 I0130 11:14:36.721050 8 log.go:172] (0xc002216d20) (1) Data frame handling I0130 11:14:36.721083 8 log.go:172] (0xc002216d20) (1) Data frame sent I0130 11:14:36.721094 8 log.go:172] (0xc000ac96b0) (0xc000f4e3c0) Stream removed, broadcasting: 5 I0130 11:14:36.721166 8 log.go:172] (0xc000ac96b0) (0xc002216d20) Stream removed, broadcasting: 1 I0130 11:14:36.721183 8 log.go:172] (0xc000ac96b0) Go away received I0130 11:14:36.721594 8 log.go:172] (0xc000ac96b0) (0xc002216d20) Stream removed, broadcasting: 1 I0130 11:14:36.721616 8 log.go:172] (0xc000ac96b0) (0xc0023a8280) Stream removed, broadcasting: 3 I0130 11:14:36.721626 8 log.go:172] (0xc000ac96b0) (0xc000f4e3c0) Stream removed, broadcasting: 5 Jan 30 11:14:36.721: INFO: Exec stderr: "" Jan 30 11:14:36.721: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5xxfk PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} 
Jan 30 11:14:36.721: INFO: >>> kubeConfig: /root/.kube/config I0130 11:14:36.876654 8 log.go:172] (0xc000570580) (0xc001d08640) Create stream I0130 11:14:36.876911 8 log.go:172] (0xc000570580) (0xc001d08640) Stream added, broadcasting: 1 I0130 11:14:36.885053 8 log.go:172] (0xc000570580) Reply frame received for 1 I0130 11:14:36.885142 8 log.go:172] (0xc000570580) (0xc0019c8000) Create stream I0130 11:14:36.885158 8 log.go:172] (0xc000570580) (0xc0019c8000) Stream added, broadcasting: 3 I0130 11:14:36.886994 8 log.go:172] (0xc000570580) Reply frame received for 3 I0130 11:14:36.887035 8 log.go:172] (0xc000570580) (0xc000f4e460) Create stream I0130 11:14:36.887048 8 log.go:172] (0xc000570580) (0xc000f4e460) Stream added, broadcasting: 5 I0130 11:14:36.888402 8 log.go:172] (0xc000570580) Reply frame received for 5 I0130 11:14:37.004327 8 log.go:172] (0xc000570580) Data frame received for 3 I0130 11:14:37.004429 8 log.go:172] (0xc0019c8000) (3) Data frame handling I0130 11:14:37.004469 8 log.go:172] (0xc0019c8000) (3) Data frame sent I0130 11:14:37.175799 8 log.go:172] (0xc000570580) Data frame received for 1 I0130 11:14:37.175965 8 log.go:172] (0xc000570580) (0xc0019c8000) Stream removed, broadcasting: 3 I0130 11:14:37.176010 8 log.go:172] (0xc001d08640) (1) Data frame handling I0130 11:14:37.176039 8 log.go:172] (0xc000570580) (0xc000f4e460) Stream removed, broadcasting: 5 I0130 11:14:37.176145 8 log.go:172] (0xc001d08640) (1) Data frame sent I0130 11:14:37.176162 8 log.go:172] (0xc000570580) (0xc001d08640) Stream removed, broadcasting: 1 I0130 11:14:37.176176 8 log.go:172] (0xc000570580) Go away received I0130 11:14:37.176427 8 log.go:172] (0xc000570580) (0xc001d08640) Stream removed, broadcasting: 1 I0130 11:14:37.176442 8 log.go:172] (0xc000570580) (0xc0019c8000) Stream removed, broadcasting: 3 I0130 11:14:37.176453 8 log.go:172] (0xc000570580) (0xc000f4e460) Stream removed, broadcasting: 5 Jan 30 11:14:37.176: INFO: Exec stderr: "" STEP: Verifying /etc/hosts 
content of container is not kubelet-managed for pod with hostNetwork=true Jan 30 11:14:37.176: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5xxfk PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 11:14:37.176: INFO: >>> kubeConfig: /root/.kube/config I0130 11:14:37.273861 8 log.go:172] (0xc000ac9b80) (0xc002216fa0) Create stream I0130 11:14:37.274080 8 log.go:172] (0xc000ac9b80) (0xc002216fa0) Stream added, broadcasting: 1 I0130 11:14:37.285658 8 log.go:172] (0xc000ac9b80) Reply frame received for 1 I0130 11:14:37.285732 8 log.go:172] (0xc000ac9b80) (0xc000f4e5a0) Create stream I0130 11:14:37.285747 8 log.go:172] (0xc000ac9b80) (0xc000f4e5a0) Stream added, broadcasting: 3 I0130 11:14:37.291180 8 log.go:172] (0xc000ac9b80) Reply frame received for 3 I0130 11:14:37.291205 8 log.go:172] (0xc000ac9b80) (0xc0019c80a0) Create stream I0130 11:14:37.291215 8 log.go:172] (0xc000ac9b80) (0xc0019c80a0) Stream added, broadcasting: 5 I0130 11:14:37.297481 8 log.go:172] (0xc000ac9b80) Reply frame received for 5 I0130 11:14:37.500822 8 log.go:172] (0xc000ac9b80) Data frame received for 3 I0130 11:14:37.500925 8 log.go:172] (0xc000f4e5a0) (3) Data frame handling I0130 11:14:37.500960 8 log.go:172] (0xc000f4e5a0) (3) Data frame sent I0130 11:14:37.610715 8 log.go:172] (0xc000ac9b80) Data frame received for 1 I0130 11:14:37.610869 8 log.go:172] (0xc002216fa0) (1) Data frame handling I0130 11:14:37.610927 8 log.go:172] (0xc002216fa0) (1) Data frame sent I0130 11:14:37.611001 8 log.go:172] (0xc000ac9b80) (0xc002216fa0) Stream removed, broadcasting: 1 I0130 11:14:37.612664 8 log.go:172] (0xc000ac9b80) (0xc0019c80a0) Stream removed, broadcasting: 5 I0130 11:14:37.612835 8 log.go:172] (0xc000ac9b80) (0xc000f4e5a0) Stream removed, broadcasting: 3 I0130 11:14:37.612860 8 log.go:172] (0xc000ac9b80) Go away received I0130 11:14:37.612940 8 log.go:172] (0xc000ac9b80) 
(0xc002216fa0) Stream removed, broadcasting: 1 I0130 11:14:37.612966 8 log.go:172] (0xc000ac9b80) (0xc000f4e5a0) Stream removed, broadcasting: 3 I0130 11:14:37.612983 8 log.go:172] (0xc000ac9b80) (0xc0019c80a0) Stream removed, broadcasting: 5 Jan 30 11:14:37.613: INFO: Exec stderr: "" Jan 30 11:14:37.613: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5xxfk PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 11:14:37.613: INFO: >>> kubeConfig: /root/.kube/config I0130 11:14:37.681656 8 log.go:172] (0xc000570bb0) (0xc001d08820) Create stream I0130 11:14:37.681699 8 log.go:172] (0xc000570bb0) (0xc001d08820) Stream added, broadcasting: 1 I0130 11:14:37.686509 8 log.go:172] (0xc000570bb0) Reply frame received for 1 I0130 11:14:37.686541 8 log.go:172] (0xc000570bb0) (0xc000f4e640) Create stream I0130 11:14:37.686585 8 log.go:172] (0xc000570bb0) (0xc000f4e640) Stream added, broadcasting: 3 I0130 11:14:37.687420 8 log.go:172] (0xc000570bb0) Reply frame received for 3 I0130 11:14:37.687457 8 log.go:172] (0xc000570bb0) (0xc000f4e780) Create stream I0130 11:14:37.687471 8 log.go:172] (0xc000570bb0) (0xc000f4e780) Stream added, broadcasting: 5 I0130 11:14:37.688371 8 log.go:172] (0xc000570bb0) Reply frame received for 5 I0130 11:14:37.776276 8 log.go:172] (0xc000570bb0) Data frame received for 3 I0130 11:14:37.776331 8 log.go:172] (0xc000f4e640) (3) Data frame handling I0130 11:14:37.776355 8 log.go:172] (0xc000f4e640) (3) Data frame sent I0130 11:14:37.881538 8 log.go:172] (0xc000570bb0) Data frame received for 1 I0130 11:14:37.881714 8 log.go:172] (0xc001d08820) (1) Data frame handling I0130 11:14:37.881766 8 log.go:172] (0xc001d08820) (1) Data frame sent I0130 11:14:37.881826 8 log.go:172] (0xc000570bb0) (0xc001d08820) Stream removed, broadcasting: 1 I0130 11:14:37.882174 8 log.go:172] (0xc000570bb0) (0xc000f4e640) Stream removed, 
broadcasting: 3 I0130 11:14:37.882328 8 log.go:172] (0xc000570bb0) (0xc000f4e780) Stream removed, broadcasting: 5 I0130 11:14:37.882509 8 log.go:172] (0xc000570bb0) (0xc001d08820) Stream removed, broadcasting: 1 I0130 11:14:37.882542 8 log.go:172] (0xc000570bb0) (0xc000f4e640) Stream removed, broadcasting: 3 I0130 11:14:37.882597 8 log.go:172] (0xc000570bb0) (0xc000f4e780) Stream removed, broadcasting: 5 I0130 11:14:37.883079 8 log.go:172] (0xc000570bb0) Go away received Jan 30 11:14:37.883: INFO: Exec stderr: "" Jan 30 11:14:37.883: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5xxfk PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 11:14:37.883: INFO: >>> kubeConfig: /root/.kube/config I0130 11:14:37.962058 8 log.go:172] (0xc0022a2370) (0xc0019c83c0) Create stream I0130 11:14:37.962159 8 log.go:172] (0xc0022a2370) (0xc0019c83c0) Stream added, broadcasting: 1 I0130 11:14:37.966659 8 log.go:172] (0xc0022a2370) Reply frame received for 1 I0130 11:14:37.966715 8 log.go:172] (0xc0022a2370) (0xc000f4e820) Create stream I0130 11:14:37.966727 8 log.go:172] (0xc0022a2370) (0xc000f4e820) Stream added, broadcasting: 3 I0130 11:14:37.967682 8 log.go:172] (0xc0022a2370) Reply frame received for 3 I0130 11:14:37.967711 8 log.go:172] (0xc0022a2370) (0xc0022170e0) Create stream I0130 11:14:37.967735 8 log.go:172] (0xc0022a2370) (0xc0022170e0) Stream added, broadcasting: 5 I0130 11:14:37.968618 8 log.go:172] (0xc0022a2370) Reply frame received for 5 I0130 11:14:38.066662 8 log.go:172] (0xc0022a2370) Data frame received for 3 I0130 11:14:38.066719 8 log.go:172] (0xc000f4e820) (3) Data frame handling I0130 11:14:38.066750 8 log.go:172] (0xc000f4e820) (3) Data frame sent I0130 11:14:38.175478 8 log.go:172] (0xc0022a2370) Data frame received for 1 I0130 11:14:38.175567 8 log.go:172] (0xc0022a2370) (0xc000f4e820) Stream removed, broadcasting: 3 I0130 
11:14:38.175613 8 log.go:172] (0xc0019c83c0) (1) Data frame handling I0130 11:14:38.175640 8 log.go:172] (0xc0019c83c0) (1) Data frame sent I0130 11:14:38.175699 8 log.go:172] (0xc0022a2370) (0xc0022170e0) Stream removed, broadcasting: 5 I0130 11:14:38.175746 8 log.go:172] (0xc0022a2370) (0xc0019c83c0) Stream removed, broadcasting: 1 I0130 11:14:38.175771 8 log.go:172] (0xc0022a2370) Go away received I0130 11:14:38.176022 8 log.go:172] (0xc0022a2370) (0xc0019c83c0) Stream removed, broadcasting: 1 I0130 11:14:38.176047 8 log.go:172] (0xc0022a2370) (0xc000f4e820) Stream removed, broadcasting: 3 I0130 11:14:38.176066 8 log.go:172] (0xc0022a2370) (0xc0022170e0) Stream removed, broadcasting: 5 Jan 30 11:14:38.176: INFO: Exec stderr: "" Jan 30 11:14:38.176: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5xxfk PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 11:14:38.176: INFO: >>> kubeConfig: /root/.kube/config I0130 11:14:38.251144 8 log.go:172] (0xc0000ead10) (0xc001e383c0) Create stream I0130 11:14:38.251218 8 log.go:172] (0xc0000ead10) (0xc001e383c0) Stream added, broadcasting: 1 I0130 11:14:38.258101 8 log.go:172] (0xc0000ead10) Reply frame received for 1 I0130 11:14:38.259182 8 log.go:172] (0xc0000ead10) (0xc001d088c0) Create stream I0130 11:14:38.259241 8 log.go:172] (0xc0000ead10) (0xc001d088c0) Stream added, broadcasting: 3 I0130 11:14:38.260967 8 log.go:172] (0xc0000ead10) Reply frame received for 3 I0130 11:14:38.261004 8 log.go:172] (0xc0000ead10) (0xc001e38460) Create stream I0130 11:14:38.261021 8 log.go:172] (0xc0000ead10) (0xc001e38460) Stream added, broadcasting: 5 I0130 11:14:38.262141 8 log.go:172] (0xc0000ead10) Reply frame received for 5 I0130 11:14:38.370410 8 log.go:172] (0xc0000ead10) Data frame received for 3 I0130 11:14:38.370495 8 log.go:172] (0xc001d088c0) (3) Data frame handling I0130 11:14:38.370523 8 
log.go:172] (0xc001d088c0) (3) Data frame sent I0130 11:14:38.511111 8 log.go:172] (0xc0000ead10) Data frame received for 1 I0130 11:14:38.511354 8 log.go:172] (0xc0000ead10) (0xc001e38460) Stream removed, broadcasting: 5 I0130 11:14:38.511431 8 log.go:172] (0xc001e383c0) (1) Data frame handling I0130 11:14:38.511466 8 log.go:172] (0xc001e383c0) (1) Data frame sent I0130 11:14:38.511501 8 log.go:172] (0xc0000ead10) (0xc001d088c0) Stream removed, broadcasting: 3 I0130 11:14:38.511551 8 log.go:172] (0xc0000ead10) (0xc001e383c0) Stream removed, broadcasting: 1 I0130 11:14:38.511572 8 log.go:172] (0xc0000ead10) Go away received I0130 11:14:38.512259 8 log.go:172] (0xc0000ead10) (0xc001e383c0) Stream removed, broadcasting: 1 I0130 11:14:38.512284 8 log.go:172] (0xc0000ead10) (0xc001d088c0) Stream removed, broadcasting: 3 I0130 11:14:38.512296 8 log.go:172] (0xc0000ead10) (0xc001e38460) Stream removed, broadcasting: 5 Jan 30 11:14:38.512: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:14:38.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-5xxfk" for this suite. 
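The exec checks above distinguish containers whose /etc/hosts is kubelet-managed from those that mount their own file (or run with hostNetwork=true). A minimal sketch of that distinction, assuming kubelet's managed file begins with its usual "# Kubernetes-managed hosts file." header comment (the header string is what current kubelet writes; treat it as an assumption, not part of this log):

```python
# Sketch of the property this test checks: a kubelet-managed /etc/hosts
# carries a recognizable header, while a container-supplied mount or the
# node's own file (hostNetwork=true) does not.
MANAGED_HEADER = "# Kubernetes-managed hosts file."


def is_kubelet_managed(hosts_content: str) -> bool:
    """Return True if the /etc/hosts content starts with kubelet's header."""
    return hosts_content.lstrip().startswith(MANAGED_HEADER)


managed = "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n"
node_file = "127.0.0.1\tlocalhost\n"
```

`is_kubelet_managed(managed)` is true while `is_kubelet_managed(node_file)` is false, mirroring the hostNetwork=false vs. hostNetwork=true verification steps in the log.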
Jan 30 11:15:34.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:15:34.777: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-5xxfk, resource: bindings, ignored listing per whitelist
Jan 30 11:15:34.800: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-5xxfk deletion completed in 56.272312653s

• [SLOW TEST:86.905 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:15:34.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 30 11:15:35.106: INFO: Waiting up to 5m0s for pod "pod-d60510e9-4351-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-sl9tx" to be "success or failure"
Jan 30 11:15:35.121: INFO: Pod "pod-d60510e9-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.337484ms
Jan 30 11:15:37.252: INFO: Pod "pod-d60510e9-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146118334s
Jan 30 11:15:39.288: INFO: Pod "pod-d60510e9-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18181434s
Jan 30 11:15:41.559: INFO: Pod "pod-d60510e9-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.452609837s
Jan 30 11:15:44.094: INFO: Pod "pod-d60510e9-4351-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.987866499s
Jan 30 11:15:46.105: INFO: Pod "pod-d60510e9-4351-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.999395288s
STEP: Saw pod success
Jan 30 11:15:46.105: INFO: Pod "pod-d60510e9-4351-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:15:46.115: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d60510e9-4351-11ea-a47a-0242ac110005 container test-container:
STEP: delete the pod
Jan 30 11:15:46.930: INFO: Waiting for pod pod-d60510e9-4351-11ea-a47a-0242ac110005 to disappear
Jan 30 11:15:46.949: INFO: Pod pod-d60510e9-4351-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:15:46.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sl9tx" for this suite.
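The (non-root,0777,tmpfs) case above mounts a Memory-medium emptyDir and verifies the mode bits of a file written into it. How a 0777 mode renders in ls-style notation can be shown with Python's standard `stat` module (an illustration of the permission string involved, not the e2e test's own code):

```python
import stat

# A regular file with mode 0777 renders as -rwxrwxrwx in ls-style
# notation; stat.S_IFREG marks the entry as a regular file.
mode_string = stat.filemode(stat.S_IFREG | 0o777)
print(mode_string)  # -rwxrwxrwx
```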
Jan 30 11:15:53.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:15:53.180: INFO: namespace: e2e-tests-emptydir-sl9tx, resource: bindings, ignored listing per whitelist
Jan 30 11:15:53.223: INFO: namespace e2e-tests-emptydir-sl9tx deletion completed in 6.212058172s

• [SLOW TEST:18.424 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:15:53.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:16:03.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-ts9nj" for this suite.
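The hostAliases test above schedules a busybox pod whose PodSpec `hostAliases` entries kubelet appends to the container's /etc/hosts, one line per IP followed by its hostnames. A minimal sketch of that rendering; the IPs and hostnames here are hypothetical, since the log does not show the actual entries used:

```python
def render_host_aliases(aliases):
    """Render PodSpec-style hostAliases as /etc/hosts lines:
    one line per entry, IP followed by its hostnames."""
    return "\n".join(
        alias["ip"] + "\t" + " ".join(alias["hostnames"]) for alias in aliases
    )


# Hypothetical aliases for illustration only.
aliases = [{"ip": "123.45.67.89", "hostnames": ["foo.local", "bar.local"]}]
print(render_host_aliases(aliases))  # 123.45.67.89	foo.local bar.local
```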
Jan 30 11:16:57.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:16:58.024: INFO: namespace: e2e-tests-kubelet-test-ts9nj, resource: bindings, ignored listing per whitelist
Jan 30 11:16:58.078: INFO: namespace e2e-tests-kubelet-test-ts9nj deletion completed in 54.244477122s
• [SLOW TEST:64.854 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:16:58.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 11:16:58.248: INFO: Creating deployment "nginx-deployment"
Jan 30 11:16:58.257: INFO: Waiting for observed generation 1
Jan 30 11:17:00.306: INFO: Waiting for all required pods to come up
Jan 30 11:17:00.338: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 30 11:17:39.155: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 30 11:17:39.164: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 30 11:17:39.178: INFO: Updating deployment nginx-deployment
Jan 30 11:17:39.178: INFO: Waiting for observed generation 2
Jan 30 11:17:41.383: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 30 11:17:42.213: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 30 11:17:43.350: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 30 11:17:43.661: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 30 11:17:43.661: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 30 11:17:43.708: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 30 11:17:44.024: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 30 11:17:44.024: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 30 11:17:44.576: INFO: Updating deployment nginx-deployment
Jan 30 11:17:44.577: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 30 11:17:44.953: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 30 11:17:47.358: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 30 11:17:47.380: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-87q4k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-87q4k/deployments/nginx-deployment,UID:079fdbc0-4352-11ea-a994-fa163e34d433,ResourceVersion:19961200,Generation:3,CreationTimestamp:2020-01-30 11:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-30 11:17:45 +0000 UTC 2020-01-30 11:17:45 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-30 11:17:46 +0000 UTC 2020-01-30 11:16:58 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 30 11:17:47.396: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-87q4k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-87q4k/replicasets/nginx-deployment-5c98f8fb5,UID:20055b7c-4352-11ea-a994-fa163e34d433,ResourceVersion:19961195,Generation:3,CreationTimestamp:2020-01-30 11:17:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 079fdbc0-4352-11ea-a994-fa163e34d433 0xc001f8e057 0xc001f8e058}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 30 11:17:47.396: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 30 11:17:47.397: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-87q4k,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-87q4k/replicasets/nginx-deployment-85ddf47c5d,UID:07aa487b-4352-11ea-a994-fa163e34d433,ResourceVersion:19961194,Generation:3,CreationTimestamp:2020-01-30 11:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 079fdbc0-4352-11ea-a994-fa163e34d433 0xc001f8e117 0xc001f8e118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 30 11:17:48.486: INFO: Pod "nginx-deployment-5c98f8fb5-26b8b" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-26b8b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-26b8b,UID:24120d3d-4352-11ea-a994-fa163e34d433,ResourceVersion:19961160,Generation:0,CreationTimestamp:2020-01-30 11:17:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc00223f8e7 0xc00223f8e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00223f950} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00223f970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.487: INFO: Pod "nginx-deployment-5c98f8fb5-5dxfx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5dxfx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-5dxfx,UID:2088673c-4352-11ea-a994-fa163e34d433,ResourceVersion:19961130,Generation:0,CreationTimestamp:2020-01-30 11:17:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc00223f9e7 0xc00223f9e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00223fa50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00223fa70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:40 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-30 11:17:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.487: INFO: Pod "nginx-deployment-5c98f8fb5-6xtz8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6xtz8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-6xtz8,UID:24207ed9-4352-11ea-a994-fa163e34d433,ResourceVersion:19961170,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc00223fb37 0xc00223fb38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00223fba0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00223fbc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.487: INFO: Pod "nginx-deployment-5c98f8fb5-gwnvf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gwnvf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-gwnvf,UID:201d4a3a-4352-11ea-a994-fa163e34d433,ResourceVersion:19961126,Generation:0,CreationTimestamp:2020-01-30 11:17:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc00223fc37 0xc00223fc38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00223fca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00223fcc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:39 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-30 11:17:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.488: INFO: Pod "nginx-deployment-5c98f8fb5-jdmkm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jdmkm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-jdmkm,UID:242085e8-4352-11ea-a994-fa163e34d433,ResourceVersion:19961169,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc00223fd87 0xc00223fd88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00223fdf0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00223fe10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.488: INFO: Pod "nginx-deployment-5c98f8fb5-k574l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-k574l,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-k574l,UID:23f2d256-4352-11ea-a994-fa163e34d433,ResourceVersion:19961201,Generation:0,CreationTimestamp:2020-01-30 11:17:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc00223fe87 0xc00223fe88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00223fef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00223ff10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:45 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-30 11:17:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.489: INFO: Pod "nginx-deployment-5c98f8fb5-lxjts" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lxjts,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-lxjts,UID:201bdf9c-4352-11ea-a994-fa163e34d433,ResourceVersion:19961123,Generation:0,CreationTimestamp:2020-01-30 11:17:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc00223ffd7 0xc00223ffd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebc040} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001ebc060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:39 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-30 11:17:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.489: INFO: Pod "nginx-deployment-5c98f8fb5-m9h48" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-m9h48,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-m9h48,UID:2420974a-4352-11ea-a994-fa163e34d433,ResourceVersion:19961168,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc001ebc1d7 0xc001ebc1d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebc240} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebc260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.490: INFO: Pod "nginx-deployment-5c98f8fb5-n9hwq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n9hwq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-n9hwq,UID:2019f53d-4352-11ea-a994-fa163e34d433,ResourceVersion:19961100,Generation:0,CreationTimestamp:2020-01-30 
11:17:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc001ebc2d7 0xc001ebc2d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebc3b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebc3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-01-30 11:17:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:39 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-30 11:17:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.490: INFO: Pod "nginx-deployment-5c98f8fb5-npvvd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-npvvd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-npvvd,UID:2440ad29-4352-11ea-a994-fa163e34d433,ResourceVersion:19961188,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc001ebc497 0xc001ebc498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebc5b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebc5d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.490: INFO: Pod "nginx-deployment-5c98f8fb5-nzsk5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nzsk5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-nzsk5,UID:2411d347-4352-11ea-a994-fa163e34d433,ResourceVersion:19961154,Generation:0,CreationTimestamp:2020-01-30 11:17:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc001ebc647 0xc001ebc648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebc6b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebc6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.490: INFO: Pod "nginx-deployment-5c98f8fb5-pvgm5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pvgm5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-pvgm5,UID:2084ad73-4352-11ea-a994-fa163e34d433,ResourceVersion:19961127,Generation:0,CreationTimestamp:2020-01-30 11:17:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc001ebc827 0xc001ebc828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebc890} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001ebc8b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:40 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-30 11:17:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.491: INFO: Pod "nginx-deployment-5c98f8fb5-rsfzq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rsfzq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-5c98f8fb5-rsfzq,UID:24206faa-4352-11ea-a994-fa163e34d433,ResourceVersion:19961171,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20055b7c-4352-11ea-a994-fa163e34d433 0xc001ebca37 0xc001ebca38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebcaa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebcac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.491: INFO: Pod "nginx-deployment-85ddf47c5d-4q9rn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4q9rn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-4q9rn,UID:07caf4ba-4352-11ea-a994-fa163e34d433,ResourceVersion:19961043,Generation:0,CreationTimestamp:2020-01-30 
11:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebcb37 0xc001ebcb38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebcc10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebcc30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:59 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-30 11:16:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 11:17:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://486b25fc5909b6ec4179bb1989e73c592e5969a2f20548f9ff258faac059398b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.491: INFO: Pod "nginx-deployment-85ddf47c5d-5fmrc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5fmrc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-5fmrc,UID:07b9cf79-4352-11ea-a994-fa163e34d433,ResourceVersion:19961049,Generation:0,CreationTimestamp:2020-01-30 11:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebccf7 0xc001ebccf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebcd60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebcd80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-30 11:16:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 11:17:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f1ec614e831e8f25aa124a9497bb8d3ae56affa59e40ca5caa20fdf29d2d4752}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.492: INFO: Pod "nginx-deployment-85ddf47c5d-5gczg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5gczg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-5gczg,UID:244118f1-4352-11ea-a994-fa163e34d433,ResourceVersion:19961190,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebce47 0xc001ebce48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001ebceb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebced0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.492: INFO: Pod "nginx-deployment-85ddf47c5d-6bn6f" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6bn6f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-6bn6f,UID:07e1e930-4352-11ea-a994-fa163e34d433,ResourceVersion:19961046,Generation:0,CreationTimestamp:2020-01-30 11:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebcf47 0xc001ebcf48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebcfb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebcfd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-30 11:16:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 11:17:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2ef130cbc80078ad063d7cf3f18b492de2d1e3d3be58f74a4b1af5992d0844c4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.493: INFO: Pod "nginx-deployment-85ddf47c5d-bxc2p" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bxc2p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-bxc2p,UID:244321c4-4352-11ea-a994-fa163e34d433,ResourceVersion:19961187,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebd097 0xc001ebd098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001ebd100} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebd120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.493: INFO: Pod "nginx-deployment-85ddf47c5d-c4c28" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c4c28,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-c4c28,UID:2425375b-4352-11ea-a994-fa163e34d433,ResourceVersion:19961178,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebd197 0xc001ebd198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebd200} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebd220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.493: INFO: Pod "nginx-deployment-85ddf47c5d-dchd9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dchd9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-dchd9,UID:23f2db26-4352-11ea-a994-fa163e34d433,ResourceVersion:19961192,Generation:0,CreationTimestamp:2020-01-30 11:17:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebd297 0xc001ebd298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebd300} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebd540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:45 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-30 
11:17:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.493: INFO: Pod "nginx-deployment-85ddf47c5d-ds4pv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ds4pv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-ds4pv,UID:07cad361-4352-11ea-a994-fa163e34d433,ResourceVersion:19961069,Generation:0,CreationTimestamp:2020-01-30 11:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebd5f7 0xc001ebd5f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebd660} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebd680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-30 11:16:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 11:17:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8a45d1ef3ba9b9be72947096365c0c503801ac6941a2c7dd9102182ceb0ef739}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.494: INFO: Pod "nginx-deployment-85ddf47c5d-fh89j" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fh89j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-fh89j,UID:07ca80c8-4352-11ea-a994-fa163e34d433,ResourceVersion:19961057,Generation:0,CreationTimestamp:2020-01-30 11:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebd747 0xc001ebd748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001ebd7b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebd7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-30 11:16:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 11:17:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8da0f337df1ec161bc73b626d2f9ca31d6d65cf3ace318d12e054d81f5020d82}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.494: INFO: Pod "nginx-deployment-85ddf47c5d-g5jxn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g5jxn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-g5jxn,UID:244346d1-4352-11ea-a994-fa163e34d433,ResourceVersion:19961191,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebd897 0xc001ebd898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebd900} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebd920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.495: INFO: Pod "nginx-deployment-85ddf47c5d-jjs8q" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jjs8q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-jjs8q,UID:07e1a748-4352-11ea-a994-fa163e34d433,ResourceVersion:19961032,Generation:0,CreationTimestamp:2020-01-30 11:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebd997 0xc001ebd998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001ebda00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebda20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-30 11:16:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 11:17:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4e81b55517dad0d073d2ffcb004ef11e495f32f4c3f0e65ea32199894dfcc0ce}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.495: INFO: Pod "nginx-deployment-85ddf47c5d-mmrz6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mmrz6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-mmrz6,UID:07e1e7a5-4352-11ea-a994-fa163e34d433,ResourceVersion:19961060,Generation:0,CreationTimestamp:2020-01-30 11:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebdae7 0xc001ebdae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebdb50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebdb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-30 11:16:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 11:17:32 +0000 
UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://04ca0613f937987dd49a6124ca414356e5b9c7702237b22dbd526ee9128c87af}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.496: INFO: Pod "nginx-deployment-85ddf47c5d-ph7pv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ph7pv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-ph7pv,UID:24430cfb-4352-11ea-a994-fa163e34d433,ResourceVersion:19961186,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebdc37 0xc001ebdc38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebdca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebdcc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.496: INFO: Pod "nginx-deployment-85ddf47c5d-pmlf2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pmlf2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-pmlf2,UID:24433509-4352-11ea-a994-fa163e34d433,ResourceVersion:19961189,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebdd37 0xc001ebdd38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebdda0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebddc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.496: INFO: Pod "nginx-deployment-85ddf47c5d-rn6xk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rn6xk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-rn6xk,UID:2424e2b8-4352-11ea-a994-fa163e34d433,ResourceVersion:19961175,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebde37 0xc001ebde38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001ebdea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebdec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.496: INFO: Pod "nginx-deployment-85ddf47c5d-rrr5j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rrr5j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-rrr5j,UID:2412fe83-4352-11ea-a994-fa163e34d433,ResourceVersion:19961156,Generation:0,CreationTimestamp:2020-01-30 11:17:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc001ebdf37 0xc001ebdf38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebdfa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebdfc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.497: INFO: Pod "nginx-deployment-85ddf47c5d-sfcgw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sfcgw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-sfcgw,UID:07b5a80b-4352-11ea-a994-fa163e34d433,ResourceVersion:19961065,Generation:0,CreationTimestamp:2020-01-30 11:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc00135e037 0xc00135e038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135e0a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135e0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:16:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-30 11:16:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 11:17:32 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://241a2cf4f47f9f070057f0ad3e669e908ae25352e36178c2708d69ef753b738d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.497: INFO: Pod "nginx-deployment-85ddf47c5d-vbfk7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vbfk7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-vbfk7,UID:2424f97a-4352-11ea-a994-fa163e34d433,ResourceVersion:19961174,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc00135e187 0xc00135e188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135e1f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135e210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.497: INFO: Pod "nginx-deployment-85ddf47c5d-zd27z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zd27z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-zd27z,UID:2425778d-4352-11ea-a994-fa163e34d433,ResourceVersion:19961180,Generation:0,CreationTimestamp:2020-01-30 11:17:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc00135e287 0xc00135e288}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135e2f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135e310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 11:17:48.498: INFO: Pod "nginx-deployment-85ddf47c5d-zsgj4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zsgj4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-87q4k,SelfLink:/api/v1/namespaces/e2e-tests-deployment-87q4k/pods/nginx-deployment-85ddf47c5d-zsgj4,UID:2412d89d-4352-11ea-a994-fa163e34d433,ResourceVersion:19961155,Generation:0,CreationTimestamp:2020-01-30 11:17:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 07aa487b-4352-11ea-a994-fa163e34d433 0xc00135e387 0xc00135e388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldqxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldqxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ldqxj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00135e3f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135e410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:17:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:17:48.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-87q4k" for this suite. Jan 30 11:18:40.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:18:40.966: INFO: namespace: e2e-tests-deployment-87q4k, resource: bindings, ignored listing per whitelist Jan 30 11:18:41.826: INFO: namespace e2e-tests-deployment-87q4k deletion completed in 51.720028125s • [SLOW TEST:103.748 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:18:41.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Jan 30 11:18:43.542: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix488956406/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:18:43.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-74tbd" for this suite. Jan 30 11:18:52.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:18:53.070: INFO: namespace: e2e-tests-kubectl-74tbd, resource: bindings, ignored listing per whitelist Jan 30 11:18:53.084: INFO: namespace e2e-tests-kubectl-74tbd deletion completed in 9.132576097s • [SLOW TEST:11.258 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:18:53.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-4c89d8c8-4352-11ea-a47a-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 30 11:18:55.913: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-b59bp" to be "success or failure" Jan 30 11:18:55.927: INFO: Pod "pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.394966ms Jan 30 11:18:58.134: INFO: Pod "pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221346957s Jan 30 11:19:00.179: INFO: Pod "pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265438473s Jan 30 11:19:02.203: INFO: Pod "pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.289756229s Jan 30 11:19:04.692: INFO: Pod "pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.778582304s Jan 30 11:19:07.030: INFO: Pod "pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.117262407s Jan 30 11:19:09.064: INFO: Pod "pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.151341905s Jan 30 11:19:11.084: INFO: Pod "pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.170777667s Jan 30 11:19:13.120: INFO: Pod "pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.20700927s STEP: Saw pod success Jan 30 11:19:13.120: INFO: Pod "pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005" satisfied condition "success or failure" Jan 30 11:19:13.128: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 30 11:19:13.368: INFO: Waiting for pod pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005 to disappear Jan 30 11:19:13.377: INFO: Pod pod-projected-configmaps-4ccaff8a-4352-11ea-a47a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:19:13.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b59bp" for this suite. 
Jan 30 11:19:21.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:19:21.551: INFO: namespace: e2e-tests-projected-b59bp, resource: bindings, ignored listing per whitelist Jan 30 11:19:21.697: INFO: namespace e2e-tests-projected-b59bp deletion completed in 8.312026533s • [SLOW TEST:28.611 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:19:21.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 30 11:19:21.983: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q5vhm,SelfLink:/api/v1/namespaces/e2e-tests-watch-q5vhm/configmaps/e2e-watch-test-watch-closed,UID:5d4963a3-4352-11ea-a994-fa163e34d433,ResourceVersion:19961519,Generation:0,CreationTimestamp:2020-01-30 11:19:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 30 11:19:21.983: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q5vhm,SelfLink:/api/v1/namespaces/e2e-tests-watch-q5vhm/configmaps/e2e-watch-test-watch-closed,UID:5d4963a3-4352-11ea-a994-fa163e34d433,ResourceVersion:19961520,Generation:0,CreationTimestamp:2020-01-30 11:19:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 30 11:19:22.055: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q5vhm,SelfLink:/api/v1/namespaces/e2e-tests-watch-q5vhm/configmaps/e2e-watch-test-watch-closed,UID:5d4963a3-4352-11ea-a994-fa163e34d433,ResourceVersion:19961521,Generation:0,CreationTimestamp:2020-01-30 11:19:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 30 11:19:22.056: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q5vhm,SelfLink:/api/v1/namespaces/e2e-tests-watch-q5vhm/configmaps/e2e-watch-test-watch-closed,UID:5d4963a3-4352-11ea-a994-fa163e34d433,ResourceVersion:19961522,Generation:0,CreationTimestamp:2020-01-30 11:19:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:19:22.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-q5vhm" for this suite. 
Jan 30 11:19:28.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:19:28.218: INFO: namespace: e2e-tests-watch-q5vhm, resource: bindings, ignored listing per whitelist Jan 30 11:19:28.286: INFO: namespace e2e-tests-watch-q5vhm deletion completed in 6.217944885s • [SLOW TEST:6.588 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:19:28.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0130 11:19:59.363500 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 30 11:19:59.363: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:19:59.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-vdgmg" for this suite. 
Jan 30 11:20:05.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:20:07.087: INFO: namespace: e2e-tests-gc-vdgmg, resource: bindings, ignored listing per whitelist Jan 30 11:20:07.147: INFO: namespace e2e-tests-gc-vdgmg deletion completed in 7.699604213s • [SLOW TEST:38.860 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:20:07.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 30 11:20:07.487: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78671a1b-4352-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-wg665" to be "success or failure" Jan 30 11:20:07.518: INFO: Pod "downwardapi-volume-78671a1b-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.392685ms Jan 30 11:20:09.686: INFO: Pod "downwardapi-volume-78671a1b-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198789354s Jan 30 11:20:11.727: INFO: Pod "downwardapi-volume-78671a1b-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239650015s Jan 30 11:20:13.745: INFO: Pod "downwardapi-volume-78671a1b-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257294714s Jan 30 11:20:15.766: INFO: Pod "downwardapi-volume-78671a1b-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.278627879s Jan 30 11:20:17.781: INFO: Pod "downwardapi-volume-78671a1b-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.293048953s Jan 30 11:20:20.117: INFO: Pod "downwardapi-volume-78671a1b-4352-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.629021688s STEP: Saw pod success Jan 30 11:20:20.117: INFO: Pod "downwardapi-volume-78671a1b-4352-11ea-a47a-0242ac110005" satisfied condition "success or failure" Jan 30 11:20:20.124: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-78671a1b-4352-11ea-a47a-0242ac110005 container client-container: STEP: delete the pod Jan 30 11:20:20.481: INFO: Waiting for pod downwardapi-volume-78671a1b-4352-11ea-a47a-0242ac110005 to disappear Jan 30 11:20:20.529: INFO: Pod downwardapi-volume-78671a1b-4352-11ea-a47a-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:20:20.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wg665" for this suite. 
Jan 30 11:20:26.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:20:26.697: INFO: namespace: e2e-tests-downward-api-wg665, resource: bindings, ignored listing per whitelist Jan 30 11:20:26.772: INFO: namespace e2e-tests-downward-api-wg665 deletion completed in 6.217221686s • [SLOW TEST:19.624 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:20:26.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 30 11:20:27.226: INFO: Waiting up to 5m0s for pod "pod-842cc5bc-4352-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-w4ghc" to be "success or failure" Jan 30 11:20:27.274: INFO: Pod "pod-842cc5bc-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.892035ms Jan 30 11:20:29.537: INFO: Pod "pod-842cc5bc-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.310095443s Jan 30 11:20:31.564: INFO: Pod "pod-842cc5bc-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336940084s Jan 30 11:20:33.602: INFO: Pod "pod-842cc5bc-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.375559571s Jan 30 11:20:35.623: INFO: Pod "pod-842cc5bc-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.396336619s Jan 30 11:20:38.136: INFO: Pod "pod-842cc5bc-4352-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.909499196s STEP: Saw pod success Jan 30 11:20:38.137: INFO: Pod "pod-842cc5bc-4352-11ea-a47a-0242ac110005" satisfied condition "success or failure" Jan 30 11:20:38.145: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-842cc5bc-4352-11ea-a47a-0242ac110005 container test-container: STEP: delete the pod Jan 30 11:20:38.540: INFO: Waiting for pod pod-842cc5bc-4352-11ea-a47a-0242ac110005 to disappear Jan 30 11:20:38.568: INFO: Pod pod-842cc5bc-4352-11ea-a47a-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:20:38.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-w4ghc" for this suite. 
Jan 30 11:20:44.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:20:44.832: INFO: namespace: e2e-tests-emptydir-w4ghc, resource: bindings, ignored listing per whitelist Jan 30 11:20:44.917: INFO: namespace e2e-tests-emptydir-w4ghc deletion completed in 6.303273527s • [SLOW TEST:18.145 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:20:44.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 30 11:20:45.116: INFO: Waiting up to 5m0s for pod "downward-api-8ed65b0c-4352-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-tbght" to be "success or failure" Jan 30 11:20:45.131: INFO: Pod "downward-api-8ed65b0c-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.83539ms Jan 30 11:20:47.151: INFO: Pod "downward-api-8ed65b0c-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034008945s Jan 30 11:20:49.175: INFO: Pod "downward-api-8ed65b0c-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058565949s Jan 30 11:20:51.452: INFO: Pod "downward-api-8ed65b0c-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.335380316s Jan 30 11:20:53.470: INFO: Pod "downward-api-8ed65b0c-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.353320403s Jan 30 11:20:55.483: INFO: Pod "downward-api-8ed65b0c-4352-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.366562031s STEP: Saw pod success Jan 30 11:20:55.483: INFO: Pod "downward-api-8ed65b0c-4352-11ea-a47a-0242ac110005" satisfied condition "success or failure" Jan 30 11:20:55.487: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8ed65b0c-4352-11ea-a47a-0242ac110005 container dapi-container: STEP: delete the pod Jan 30 11:20:55.604: INFO: Waiting for pod downward-api-8ed65b0c-4352-11ea-a47a-0242ac110005 to disappear Jan 30 11:20:56.284: INFO: Pod downward-api-8ed65b0c-4352-11ea-a47a-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:20:56.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tbght" for this suite. 
Jan 30 11:21:02.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:21:02.905: INFO: namespace: e2e-tests-downward-api-tbght, resource: bindings, ignored listing per whitelist
Jan 30 11:21:02.977: INFO: namespace e2e-tests-downward-api-tbght deletion completed in 6.660645002s
• [SLOW TEST:18.059 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:21:02.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 30 11:21:03.151: INFO: Waiting up to 5m0s for pod "pod-99911d3d-4352-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-22x87" to be "success or failure"
Jan 30 11:21:03.163: INFO: Pod "pod-99911d3d-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.385856ms
Jan 30 11:21:05.196: INFO: Pod "pod-99911d3d-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044866535s
Jan 30 11:21:07.211: INFO: Pod "pod-99911d3d-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05994192s
Jan 30 11:21:09.235: INFO: Pod "pod-99911d3d-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083831519s
Jan 30 11:21:11.278: INFO: Pod "pod-99911d3d-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127170325s
Jan 30 11:21:13.297: INFO: Pod "pod-99911d3d-4352-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.14592791s
STEP: Saw pod success
Jan 30 11:21:13.297: INFO: Pod "pod-99911d3d-4352-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:21:13.309: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-99911d3d-4352-11ea-a47a-0242ac110005 container test-container:
STEP: delete the pod
Jan 30 11:21:14.386: INFO: Waiting for pod pod-99911d3d-4352-11ea-a47a-0242ac110005 to disappear
Jan 30 11:21:14.550: INFO: Pod pod-99911d3d-4352-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:21:14.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-22x87" for this suite.
Jan 30 11:21:20.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:21:20.794: INFO: namespace: e2e-tests-emptydir-22x87, resource: bindings, ignored listing per whitelist
Jan 30 11:21:20.866: INFO: namespace e2e-tests-emptydir-22x87 deletion completed in 6.275600609s
• [SLOW TEST:17.889 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:21:20.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-a4443942-4352-11ea-a47a-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-a4443942-4352-11ea-a47a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:21:35.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8mmz9" for this suite.
Jan 30 11:21:59.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:21:59.681: INFO: namespace: e2e-tests-projected-8mmz9, resource: bindings, ignored listing per whitelist
Jan 30 11:21:59.846: INFO: namespace e2e-tests-projected-8mmz9 deletion completed in 24.244597086s
• [SLOW TEST:38.979 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:21:59.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-bbdb7148-4352-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 11:22:01.505: INFO: Waiting up to 5m0s for pod "pod-secrets-bc5ccacf-4352-11ea-a47a-0242ac110005" in namespace "e2e-tests-secrets-db5q2" to be "success or failure"
Jan 30 11:22:01.520: INFO: Pod "pod-secrets-bc5ccacf-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.507964ms
Jan 30 11:22:03.782: INFO: Pod "pod-secrets-bc5ccacf-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276922906s
Jan 30 11:22:05.817: INFO: Pod "pod-secrets-bc5ccacf-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31242176s
Jan 30 11:22:07.840: INFO: Pod "pod-secrets-bc5ccacf-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.335438857s
Jan 30 11:22:09.868: INFO: Pod "pod-secrets-bc5ccacf-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363045572s
Jan 30 11:22:11.905: INFO: Pod "pod-secrets-bc5ccacf-4352-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.400493061s
STEP: Saw pod success
Jan 30 11:22:11.906: INFO: Pod "pod-secrets-bc5ccacf-4352-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:22:11.919: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-bc5ccacf-4352-11ea-a47a-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 30 11:22:12.873: INFO: Waiting for pod pod-secrets-bc5ccacf-4352-11ea-a47a-0242ac110005 to disappear
Jan 30 11:22:13.016: INFO: Pod pod-secrets-bc5ccacf-4352-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:22:13.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-db5q2" for this suite.
Jan 30 11:22:19.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:22:19.336: INFO: namespace: e2e-tests-secrets-db5q2, resource: bindings, ignored listing per whitelist
Jan 30 11:22:19.368: INFO: namespace e2e-tests-secrets-db5q2 deletion completed in 6.297004817s
STEP: Destroying namespace "e2e-tests-secret-namespace-z8bhb" for this suite.
Jan 30 11:22:25.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:22:25.504: INFO: namespace: e2e-tests-secret-namespace-z8bhb, resource: bindings, ignored listing per whitelist
Jan 30 11:22:25.601: INFO: namespace e2e-tests-secret-namespace-z8bhb deletion completed in 6.233012891s
• [SLOW TEST:25.754 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:22:25.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 11:22:25.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:22:36.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wknwh" for this suite.
Jan 30 11:23:18.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:23:18.267: INFO: namespace: e2e-tests-pods-wknwh, resource: bindings, ignored listing per whitelist
Jan 30 11:23:18.312: INFO: namespace e2e-tests-pods-wknwh deletion completed in 42.207835833s
• [SLOW TEST:52.711 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:23:18.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 30 11:23:18.603: INFO: Waiting up to 5m0s for pod "pod-ea4a2727-4352-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-n5qhc" to be "success or failure"
Jan 30 11:23:18.709: INFO: Pod "pod-ea4a2727-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 105.002288ms
Jan 30 11:23:20.739: INFO: Pod "pod-ea4a2727-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135227064s
Jan 30 11:23:22.754: INFO: Pod "pod-ea4a2727-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150262743s
Jan 30 11:23:24.796: INFO: Pod "pod-ea4a2727-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192472348s
Jan 30 11:23:26.892: INFO: Pod "pod-ea4a2727-4352-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.288559028s
Jan 30 11:23:28.907: INFO: Pod "pod-ea4a2727-4352-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.303750599s
STEP: Saw pod success
Jan 30 11:23:28.908: INFO: Pod "pod-ea4a2727-4352-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:23:28.913: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ea4a2727-4352-11ea-a47a-0242ac110005 container test-container:
STEP: delete the pod
Jan 30 11:23:29.009: INFO: Waiting for pod pod-ea4a2727-4352-11ea-a47a-0242ac110005 to disappear
Jan 30 11:23:29.122: INFO: Pod pod-ea4a2727-4352-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:23:29.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-n5qhc" for this suite.
Jan 30 11:23:35.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:23:35.241: INFO: namespace: e2e-tests-emptydir-n5qhc, resource: bindings, ignored listing per whitelist
Jan 30 11:23:35.345: INFO: namespace e2e-tests-emptydir-n5qhc deletion completed in 6.211665123s
• [SLOW TEST:17.033 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:23:35.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-kntf
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 11:23:35.558: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kntf" in namespace "e2e-tests-subpath-rcrl4" to be "success or failure"
Jan 30 11:23:35.595: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Pending", Reason="", readiness=false. Elapsed: 36.553837ms
Jan 30 11:23:37.622: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063922396s
Jan 30 11:23:39.634: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075478802s
Jan 30 11:23:41.919: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.360525056s
Jan 30 11:23:43.960: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.401445707s
Jan 30 11:23:46.401: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.842717247s
Jan 30 11:23:48.628: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.069458627s
Jan 30 11:23:50.659: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.101157666s
Jan 30 11:23:52.676: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Running", Reason="", readiness=false. Elapsed: 17.118172857s
Jan 30 11:23:54.698: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Running", Reason="", readiness=false. Elapsed: 19.139730629s
Jan 30 11:23:56.717: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Running", Reason="", readiness=false. Elapsed: 21.158821233s
Jan 30 11:23:58.755: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Running", Reason="", readiness=false. Elapsed: 23.196721054s
Jan 30 11:24:00.774: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Running", Reason="", readiness=false. Elapsed: 25.215191739s
Jan 30 11:24:02.795: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Running", Reason="", readiness=false. Elapsed: 27.237056675s
Jan 30 11:24:04.807: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Running", Reason="", readiness=false. Elapsed: 29.24886142s
Jan 30 11:24:06.831: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Running", Reason="", readiness=false. Elapsed: 31.273041643s
Jan 30 11:24:09.069: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Running", Reason="", readiness=false. Elapsed: 33.511036245s
Jan 30 11:24:11.086: INFO: Pod "pod-subpath-test-secret-kntf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.527696728s
STEP: Saw pod success
Jan 30 11:24:11.086: INFO: Pod "pod-subpath-test-secret-kntf" satisfied condition "success or failure"
Jan 30 11:24:11.097: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-kntf container test-container-subpath-secret-kntf:
STEP: delete the pod
Jan 30 11:24:11.827: INFO: Waiting for pod pod-subpath-test-secret-kntf to disappear
Jan 30 11:24:12.321: INFO: Pod pod-subpath-test-secret-kntf no longer exists
STEP: Deleting pod pod-subpath-test-secret-kntf
Jan 30 11:24:12.321: INFO: Deleting pod "pod-subpath-test-secret-kntf" in namespace "e2e-tests-subpath-rcrl4"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:24:12.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-rcrl4" for this suite.
Jan 30 11:24:18.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:24:18.506: INFO: namespace: e2e-tests-subpath-rcrl4, resource: bindings, ignored listing per whitelist
Jan 30 11:24:18.619: INFO: namespace e2e-tests-subpath-rcrl4 deletion completed in 6.276134335s
• [SLOW TEST:43.273 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:24:18.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 11:24:18.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 30 11:24:19.047: INFO: stderr: ""
Jan 30 11:24:19.047: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:24:19.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-85wvt" for this suite.
Jan 30 11:24:25.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:24:25.194: INFO: namespace: e2e-tests-kubectl-85wvt, resource: bindings, ignored listing per whitelist
Jan 30 11:24:25.410: INFO: namespace e2e-tests-kubectl-85wvt deletion completed in 6.350737834s
• [SLOW TEST:6.791 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:24:25.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-96z4
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 11:24:25.708: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-96z4" in namespace "e2e-tests-subpath-hnfd8" to be "success or failure"
Jan 30 11:24:25.909: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Pending", Reason="", readiness=false. Elapsed: 201.300754ms
Jan 30 11:24:27.926: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217784501s
Jan 30 11:24:29.942: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233410777s
Jan 30 11:24:32.667: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.958555788s
Jan 30 11:24:34.703: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.994647318s
Jan 30 11:24:36.741: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.032565463s
Jan 30 11:24:38.772: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.06348996s
Jan 30 11:24:40.787: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.078824373s
Jan 30 11:24:42.801: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.093107292s
Jan 30 11:24:44.810: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Running", Reason="", readiness=false. Elapsed: 19.101739239s
Jan 30 11:24:46.832: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Running", Reason="", readiness=false. Elapsed: 21.124020444s
Jan 30 11:24:48.865: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Running", Reason="", readiness=false. Elapsed: 23.156779792s
Jan 30 11:24:50.884: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Running", Reason="", readiness=false. Elapsed: 25.176009208s
Jan 30 11:24:52.900: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Running", Reason="", readiness=false. Elapsed: 27.192019923s
Jan 30 11:24:54.914: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Running", Reason="", readiness=false. Elapsed: 29.206086322s
Jan 30 11:24:56.929: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Running", Reason="", readiness=false. Elapsed: 31.220360416s
Jan 30 11:24:58.948: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Running", Reason="", readiness=false. Elapsed: 33.239887634s
Jan 30 11:25:00.976: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Running", Reason="", readiness=false. Elapsed: 35.268170439s
Jan 30 11:25:02.992: INFO: Pod "pod-subpath-test-projected-96z4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.283941065s
STEP: Saw pod success
Jan 30 11:25:02.992: INFO: Pod "pod-subpath-test-projected-96z4" satisfied condition "success or failure"
Jan 30 11:25:02.998: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-96z4 container test-container-subpath-projected-96z4:
STEP: delete the pod
Jan 30 11:25:03.704: INFO: Waiting for pod pod-subpath-test-projected-96z4 to disappear
Jan 30 11:25:03.740: INFO: Pod pod-subpath-test-projected-96z4 no longer exists
STEP: Deleting pod pod-subpath-test-projected-96z4
Jan 30 11:25:03.740: INFO: Deleting pod "pod-subpath-test-projected-96z4" in namespace "e2e-tests-subpath-hnfd8"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:25:03.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hnfd8" for this suite.
Jan 30 11:25:09.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:25:10.095: INFO: namespace: e2e-tests-subpath-hnfd8, resource: bindings, ignored listing per whitelist
Jan 30 11:25:10.137: INFO: namespace e2e-tests-subpath-hnfd8 deletion completed in 6.377270503s
• [SLOW TEST:44.726 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:25:10.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 30 11:25:20.803: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-2d119426-4353-11ea-a47a-0242ac110005,GenerateName:,Namespace:e2e-tests-events-kpwn6,SelfLink:/api/v1/namespaces/e2e-tests-events-kpwn6/pods/send-events-2d119426-4353-11ea-a47a-0242ac110005,UID:2d175868-4353-11ea-a994-fa163e34d433,ResourceVersion:19962314,Generation:0,CreationTimestamp:2020-01-30 11:25:10 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 569531063,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hzzff {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hzzff,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-hzzff true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002420f60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002420f80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:25:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:25:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:25:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:25:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-30 11:25:10 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-30 11:25:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://80681afe76a2fbe4d999eeb55d388c170bef9f66d1449fe746ef8f7648f856f9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Jan 30 11:25:22.825: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 30 11:25:24.847: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:25:24.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-kpwn6" for this suite.
Jan 30 11:26:05.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:26:05.289: INFO: namespace: e2e-tests-events-kpwn6, resource: bindings, ignored listing per whitelist
Jan 30 11:26:05.300: INFO: namespace e2e-tests-events-kpwn6 deletion completed in 40.350685919s

• [SLOW TEST:55.163 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:26:05.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:26:15.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-k77dx" for this suite.
Jan 30 11:27:05.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:27:05.879: INFO: namespace: e2e-tests-kubelet-test-k77dx, resource: bindings, ignored listing per whitelist
Jan 30 11:27:06.008: INFO: namespace e2e-tests-kubelet-test-k77dx deletion completed in 50.238333665s

• [SLOW TEST:60.707 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:27:06.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 30 11:27:16.258: INFO: Pod pod-hostip-71fd1211-4353-11ea-a47a-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:27:16.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-k2r66" for this suite.
Jan 30 11:27:40.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:27:40.424: INFO: namespace: e2e-tests-pods-k2r66, resource: bindings, ignored listing per whitelist
Jan 30 11:27:40.638: INFO: namespace e2e-tests-pods-k2r66 deletion completed in 24.37091865s

• [SLOW TEST:34.630 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:27:40.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 11:27:50.984: INFO: Waiting up to 5m0s for pod "client-envvars-8cacfb94-4353-11ea-a47a-0242ac110005" in namespace "e2e-tests-pods-hqdf5" to be "success or failure"
Jan 30 11:27:50.998: INFO: Pod "client-envvars-8cacfb94-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.350761ms
Jan 30 11:27:53.020: INFO: Pod "client-envvars-8cacfb94-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035854985s
Jan 30 11:27:55.034: INFO: Pod "client-envvars-8cacfb94-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049829711s
Jan 30 11:27:57.855: INFO: Pod "client-envvars-8cacfb94-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.870677623s
Jan 30 11:27:59.918: INFO: Pod "client-envvars-8cacfb94-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.933885483s
Jan 30 11:28:01.969: INFO: Pod "client-envvars-8cacfb94-4353-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.984623909s
STEP: Saw pod success
Jan 30 11:28:01.969: INFO: Pod "client-envvars-8cacfb94-4353-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:28:02.034: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-8cacfb94-4353-11ea-a47a-0242ac110005 container env3cont:
STEP: delete the pod
Jan 30 11:28:02.227: INFO: Waiting for pod client-envvars-8cacfb94-4353-11ea-a47a-0242ac110005 to disappear
Jan 30 11:28:02.463: INFO: Pod client-envvars-8cacfb94-4353-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:28:02.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-hqdf5" for this suite.
Jan 30 11:28:57.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:28:57.350: INFO: namespace: e2e-tests-pods-hqdf5, resource: bindings, ignored listing per whitelist
Jan 30 11:28:57.383: INFO: namespace e2e-tests-pods-hqdf5 deletion completed in 54.889412472s

• [SLOW TEST:76.744 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:28:57.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 30 11:29:08.295: INFO: Successfully updated pod "annotationupdateb460e89c-4353-11ea-a47a-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:29:10.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-h5kfj" for this suite.
Jan 30 11:29:34.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:29:34.744: INFO: namespace: e2e-tests-downward-api-h5kfj, resource: bindings, ignored listing per whitelist
Jan 30 11:29:34.802: INFO: namespace e2e-tests-downward-api-h5kfj deletion completed in 24.36118466s

• [SLOW TEST:37.419 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:29:34.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-caa6b7b2-4353-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 11:29:35.222: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cacd8884-4353-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-lpfm6" to be "success or failure"
Jan 30 11:29:35.256: INFO: Pod "pod-projected-secrets-cacd8884-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.701146ms
Jan 30 11:29:37.271: INFO: Pod "pod-projected-secrets-cacd8884-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049091798s
Jan 30 11:29:39.288: INFO: Pod "pod-projected-secrets-cacd8884-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066225573s
Jan 30 11:29:41.303: INFO: Pod "pod-projected-secrets-cacd8884-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080623511s
Jan 30 11:29:43.331: INFO: Pod "pod-projected-secrets-cacd8884-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10870899s
Jan 30 11:29:45.343: INFO: Pod "pod-projected-secrets-cacd8884-4353-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.121248014s
STEP: Saw pod success
Jan 30 11:29:45.343: INFO: Pod "pod-projected-secrets-cacd8884-4353-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:29:45.355: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-cacd8884-4353-11ea-a47a-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Jan 30 11:29:46.521: INFO: Waiting for pod pod-projected-secrets-cacd8884-4353-11ea-a47a-0242ac110005 to disappear
Jan 30 11:29:46.671: INFO: Pod pod-projected-secrets-cacd8884-4353-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:29:46.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lpfm6" for this suite.
Jan 30 11:29:52.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:29:52.999: INFO: namespace: e2e-tests-projected-lpfm6, resource: bindings, ignored listing per whitelist
Jan 30 11:29:53.082: INFO: namespace e2e-tests-projected-lpfm6 deletion completed in 6.38519146s

• [SLOW TEST:18.280 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:29:53.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 30 11:29:53.334: INFO: Waiting up to 5m0s for pod "client-containers-d5993740-4353-11ea-a47a-0242ac110005" in namespace "e2e-tests-containers-jt62m" to be "success or failure"
Jan 30 11:29:53.428: INFO: Pod "client-containers-d5993740-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 93.413364ms
Jan 30 11:29:55.443: INFO: Pod "client-containers-d5993740-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108920452s
Jan 30 11:29:57.457: INFO: Pod "client-containers-d5993740-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122090828s
Jan 30 11:29:59.684: INFO: Pod "client-containers-d5993740-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349045005s
Jan 30 11:30:01.696: INFO: Pod "client-containers-d5993740-4353-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.361855266s
Jan 30 11:30:03.714: INFO: Pod "client-containers-d5993740-4353-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.379271306s
STEP: Saw pod success
Jan 30 11:30:03.714: INFO: Pod "client-containers-d5993740-4353-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:30:03.725: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-d5993740-4353-11ea-a47a-0242ac110005 container test-container:
STEP: delete the pod
Jan 30 11:30:03.941: INFO: Waiting for pod client-containers-d5993740-4353-11ea-a47a-0242ac110005 to disappear
Jan 30 11:30:04.084: INFO: Pod client-containers-d5993740-4353-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:30:04.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-jt62m" for this suite.
Jan 30 11:30:10.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:30:10.345: INFO: namespace: e2e-tests-containers-jt62m, resource: bindings, ignored listing per whitelist
Jan 30 11:30:10.417: INFO: namespace e2e-tests-containers-jt62m deletion completed in 6.309637463s

• [SLOW TEST:17.334 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:30:10.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 30 11:30:10.638: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:30:27.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-9rssl" for this suite.
Jan 30 11:30:35.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:30:35.558: INFO: namespace: e2e-tests-init-container-9rssl, resource: bindings, ignored listing per whitelist
Jan 30 11:30:35.616: INFO: namespace e2e-tests-init-container-9rssl deletion completed in 8.310291924s

• [SLOW TEST:25.199 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:30:35.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-97x2h
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 30 11:30:35.775: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 30 11:31:16.152: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-97x2h PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 11:31:16.152: INFO: >>> kubeConfig: /root/.kube/config
I0130 11:31:16.296825 8 log.go:172] (0xc0000ead10) (0xc001e232c0) Create stream
I0130 11:31:16.297081 8 log.go:172] (0xc0000ead10) (0xc001e232c0) Stream added, broadcasting: 1
I0130 11:31:16.308173 8 log.go:172] (0xc0000ead10) Reply frame received for 1
I0130 11:31:16.308530 8 log.go:172] (0xc0000ead10) (0xc001987900) Create stream
I0130 11:31:16.308586 8 log.go:172] (0xc0000ead10) (0xc001987900) Stream added, broadcasting: 3
I0130 11:31:16.311061 8 log.go:172] (0xc0000ead10) Reply frame received for 3
I0130 11:31:16.311134 8 log.go:172] (0xc0000ead10) (0xc0024fc000) Create stream
I0130 11:31:16.311158 8 log.go:172] (0xc0000ead10) (0xc0024fc000) Stream added, broadcasting: 5
I0130 11:31:16.312385 8 log.go:172] (0xc0000ead10) Reply frame received for 5
I0130 11:31:17.500650 8 log.go:172] (0xc0000ead10) Data frame received for 3
I0130 11:31:17.500778 8 log.go:172] (0xc001987900) (3) Data frame handling
I0130 11:31:17.500802 8 log.go:172] (0xc001987900) (3) Data frame sent
I0130 11:31:17.665176 8 log.go:172] (0xc0000ead10) (0xc001987900) Stream removed, broadcasting: 3
I0130 11:31:17.665466 8 log.go:172] (0xc0000ead10) Data frame received for 1
I0130 11:31:17.665539 8 log.go:172] (0xc0000ead10) (0xc0024fc000) Stream removed, broadcasting: 5
I0130 11:31:17.665653 8 log.go:172] (0xc001e232c0) (1) Data frame handling
I0130 11:31:17.665695 8 log.go:172] (0xc001e232c0) (1) Data frame sent
I0130 11:31:17.665738 8 log.go:172] (0xc0000ead10) (0xc001e232c0) Stream removed, broadcasting: 1
I0130 11:31:17.665850 8 log.go:172] (0xc0000ead10) Go away received
I0130 11:31:17.666694 8 log.go:172] (0xc0000ead10) (0xc001e232c0) Stream removed, broadcasting: 1
I0130 11:31:17.667144 8 log.go:172] (0xc0000ead10) (0xc001987900) Stream removed, broadcasting: 3
I0130 11:31:17.667170 8 log.go:172] (0xc0000ead10) (0xc0024fc000) Stream removed, broadcasting: 5
Jan 30 11:31:17.667: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:31:17.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-97x2h" for this suite.
Jan 30 11:31:41.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:31:41.871: INFO: namespace: e2e-tests-pod-network-test-97x2h, resource: bindings, ignored listing per whitelist
Jan 30 11:31:41.955: INFO: namespace e2e-tests-pod-network-test-97x2h deletion completed in 24.254483343s

• [SLOW TEST:66.338 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:31:41.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 11:31:42.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16877b16-4354-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-85fff" to be "success or failure"
Jan 30 11:31:42.281: INFO: Pod "downwardapi-volume-16877b16-4354-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.730637ms
Jan 30 11:31:44.301: INFO: Pod "downwardapi-volume-16877b16-4354-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033399727s
Jan 30 11:31:46.313: INFO: Pod "downwardapi-volume-16877b16-4354-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045385958s
Jan 30 11:31:49.439: INFO: Pod "downwardapi-volume-16877b16-4354-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.171200752s
Jan 30 11:31:51.522: INFO: Pod "downwardapi-volume-16877b16-4354-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.254596854s
Jan 30 11:31:53.568: INFO: Pod "downwardapi-volume-16877b16-4354-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.300848107s
STEP: Saw pod success
Jan 30 11:31:53.569: INFO: Pod "downwardapi-volume-16877b16-4354-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:31:53.597: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-16877b16-4354-11ea-a47a-0242ac110005 container client-container:
STEP: delete the pod
Jan 30 11:31:53.762: INFO: Waiting for pod downwardapi-volume-16877b16-4354-11ea-a47a-0242ac110005 to disappear
Jan 30 11:31:53.808: INFO: Pod downwardapi-volume-16877b16-4354-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:31:53.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-85fff" for this suite.
Jan 30 11:32:00.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:32:00.085: INFO: namespace: e2e-tests-downward-api-85fff, resource: bindings, ignored listing per whitelist
Jan 30 11:32:00.292: INFO: namespace e2e-tests-downward-api-85fff deletion completed in 6.450322038s

• [SLOW TEST:18.337 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:32:00.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:32:08.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-c9298" for this suite.
Jan 30 11:32:14.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:32:14.156: INFO: namespace: e2e-tests-namespaces-c9298, resource: bindings, ignored listing per whitelist
Jan 30 11:32:14.231: INFO: namespace e2e-tests-namespaces-c9298 deletion completed in 6.152206636s
STEP: Destroying namespace "e2e-tests-nsdeletetest-l5g89" for this suite.
Jan 30 11:32:14.234: INFO: Namespace e2e-tests-nsdeletetest-l5g89 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-2z6vp" for this suite.
Jan 30 11:32:20.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:32:20.337: INFO: namespace: e2e-tests-nsdeletetest-2z6vp, resource: bindings, ignored listing per whitelist
Jan 30 11:32:20.609: INFO: namespace e2e-tests-nsdeletetest-2z6vp deletion completed in 6.374667455s

• [SLOW TEST:20.316 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:32:20.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-2d87fce0-4354-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 11:32:20.865: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2d88c0c8-4354-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-dk5t5" to be "success or failure"
Jan 30 11:32:21.006: INFO: Pod "pod-projected-secrets-2d88c0c8-4354-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 140.472388ms
Jan 30 11:32:23.088: INFO: Pod "pod-projected-secrets-2d88c0c8-4354-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222876196s
Jan 30 11:32:25.111: INFO: Pod "pod-projected-secrets-2d88c0c8-4354-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.24532742s
Jan 30 11:32:27.362: INFO: Pod "pod-projected-secrets-2d88c0c8-4354-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.496472162s
Jan 30 11:32:29.408: INFO: Pod "pod-projected-secrets-2d88c0c8-4354-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543245227s
Jan 30 11:32:31.425: INFO: Pod "pod-projected-secrets-2d88c0c8-4354-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.559386387s
STEP: Saw pod success
Jan 30 11:32:31.425: INFO: Pod "pod-projected-secrets-2d88c0c8-4354-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:32:31.439: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2d88c0c8-4354-11ea-a47a-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Jan 30 11:32:32.973: INFO: Waiting for pod pod-projected-secrets-2d88c0c8-4354-11ea-a47a-0242ac110005 to disappear
Jan 30 11:32:33.107: INFO: Pod pod-projected-secrets-2d88c0c8-4354-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:32:33.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dk5t5" for this suite.
Jan 30 11:32:39.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:32:39.226: INFO: namespace: e2e-tests-projected-dk5t5, resource: bindings, ignored listing per whitelist Jan 30 11:32:39.399: INFO: namespace e2e-tests-projected-dk5t5 deletion completed in 6.276639095s • [SLOW TEST:18.789 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:32:39.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-q5vkk in namespace e2e-tests-proxy-9h446 I0130 11:32:39.717256 8 runners.go:184] Created replication controller with name: proxy-service-q5vkk, namespace: e2e-tests-proxy-9h446, replica count: 1 I0130 11:32:40.768974 8 runners.go:184] proxy-service-q5vkk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 11:32:41.770399 8 runners.go:184] 
proxy-service-q5vkk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady [the same status line repeats once per second through 11:32:48] I0130 11:32:49.778205 8 runners.go:184] proxy-service-q5vkk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0130 11:32:50.779072 8 runners.go:184] proxy-service-q5vkk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0130 11:32:51.780068 8 runners.go:184] proxy-service-q5vkk Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 30 11:32:51.908: INFO: setup took 12.338234337s, starting test cases STEP: 
running 16 cases, 20 attempts per case, 320 total attempts Jan 30 11:32:51.953: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9h446/pods/http:proxy-service-q5vkk-5tj9c:160/proxy/: foo (200; 44.847823ms) Jan 30 11:32:51.953: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9h446/pods/proxy-service-q5vkk-5tj9c:160/proxy/: foo (200; 44.851272ms) Jan 30 11:32:51.956: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9h446/pods/http:proxy-service-q5vkk-5tj9c:1080/proxy/: [log truncated: the response body, the remaining proxy attempts, and the header of the next test are missing] >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 30 11:36:10.306: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 30 11:36:10.437: INFO: Pod pod-with-poststart-exec-hook still exists Jan 30 11:36:12.438: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 30 11:36:12.784: INFO: Pod pod-with-poststart-exec-hook still exists Jan 30 11:36:14.438: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 30 11:36:14.482: INFO: Pod pod-with-poststart-exec-hook still exists Jan 30 11:36:16.438: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 30 11:36:16.453: INFO: Pod pod-with-poststart-exec-hook still exists Jan 30 11:36:18.438: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 30 11:36:18.461: INFO: Pod pod-with-poststart-exec-hook still exists Jan 30 11:36:20.438: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Jan 30 11:36:20.462: INFO: Pod pod-with-poststart-exec-hook still exists [the same wait/still-exists pair repeats at 2s intervals until 11:39:10] Jan 30 11:39:10.438: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 30 11:39:10.467: INFO: Pod pod-with-poststart-exec-hook still exists Jan 30 11:39:10.467: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 30 11:39:10.495: INFO: Pod pod-with-poststart-exec-hook still exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 STEP: Collecting events from namespace "e2e-tests-container-lifecycle-hook-dq2qh". 
STEP: Found 12 events. Jan 30 11:39:10.555: INFO: At 2020-01-30 11:33:05 +0000 UTC - event for pod-handle-http-request: {default-scheduler } Scheduled: Successfully assigned e2e-tests-container-lifecycle-hook-dq2qh/pod-handle-http-request to hunter-server-hu5at5svl7ps Jan 30 11:39:10.555: INFO: At 2020-01-30 11:33:10 +0000 UTC - event for pod-handle-http-request: {kubelet hunter-server-hu5at5svl7ps} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec:1.1" already present on machine Jan 30 11:39:10.555: INFO: At 2020-01-30 11:33:13 +0000 UTC - event for pod-handle-http-request: {kubelet hunter-server-hu5at5svl7ps} Created: Created container Jan 30 11:39:10.555: INFO: At 2020-01-30 11:33:14 +0000 UTC - event for pod-handle-http-request: {kubelet hunter-server-hu5at5svl7ps} Started: Started container Jan 30 11:39:10.555: INFO: At 2020-01-30 11:33:16 +0000 UTC - event for pod-with-poststart-exec-hook: {default-scheduler } Scheduled: Successfully assigned e2e-tests-container-lifecycle-hook-dq2qh/pod-with-poststart-exec-hook to hunter-server-hu5at5svl7ps Jan 30 11:39:10.555: INFO: At 2020-01-30 11:33:20 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/hostexec:1.1" already present on machine Jan 30 11:39:10.555: INFO: At 2020-01-30 11:33:23 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Created: Created container Jan 30 11:39:10.555: INFO: At 2020-01-30 11:33:24 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Started: Started container Jan 30 11:39:10.555: INFO: At 2020-01-30 11:35:35 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} FailedPostStartHook: Exec lifecycle hook ([sh -c curl http://10.32.0.4:8080/echo?msg=poststart]) for Container "pod-with-poststart-exec-hook" in Pod 
"pod-with-poststart-exec-hook_e2e-tests-container-lifecycle-hook-dq2qh(4e6acb97-4354-11ea-a994-fa163e34d433)" failed - error: command 'sh -c curl http://10.32.0.4:8080/echo?msg=poststart' exited with 7: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:06 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:07 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:08 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:10 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:11 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:12 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:13 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:14 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:15 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:16 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:17 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:18 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:19 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:20 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:21 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:22 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:23 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:24 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:25 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:26 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:27 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:28 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:29 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:30 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:31 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:32 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:33 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:34 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:35 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:36 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:37 --:--:-- 0 0 0 0 0 0 0 0 0 
--:--:-- 0:00:38 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:39 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:40 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:41 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:42 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:43 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:44 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:45 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:46 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:47 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:48 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:49 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:50 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:51 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:52 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:53 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:54 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:55 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:56 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:57 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:58 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:59 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:00 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:01 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:02 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:03 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:04 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:05 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:06 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:07 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:08 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:09 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:10 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:11 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:12 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:13 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:14 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:15 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:16 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:17 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:18 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:19 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:20 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:21 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:22 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:23 
--:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:24 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:25 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:26 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:27 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:28 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:29 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:30 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:31 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:32 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:33 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:34 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:35 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:36 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:37 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:38 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:39 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:40 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:41 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:42 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:43 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:44 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:45 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:46 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:47 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:48 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:49 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:50 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:51 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:52 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:53 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:54 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:55 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:56 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:57 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:58 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:01:59 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:02:00 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:02:01 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:02:02 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:02:03 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:02:04 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:02:05 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:02:06 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:02:07 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:02:08 --:--:-- 0 0 0 0 0 
0 0 0 0 --:--:-- 0:02:09 --:--:-- 0curl: (7) Failed to connect to 10.32.0.4 port 8080: Operation timed out , message: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:06 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:07 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:08 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:10 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:11 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:12 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:13 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:14 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:15 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:16 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:17 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:18 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:19 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:20 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:21 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:22 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:23 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:24 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:25 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:26 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:27 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:28 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:29 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:30 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:31 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:32 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:33 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:34 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:35 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:36 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:37 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 
[curl progress-meter output elided: 0 bytes transferred while the elapsed-time column advanced from 0:00:38 to 0:02:09]
curl: (7) Failed to connect to 10.32.0.4 port 8080: Operation timed out\n" Jan 30 11:39:10.555: INFO: At 2020-01-30 11:36:06 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Killing: Killing container with id docker://pod-with-poststart-exec-hook:FailedPostStartHook Jan 30 11:39:10.555: INFO: At 2020-01-30 11:36:10 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 11:39:10.555: INFO: At 2020-01-30 11:36:42 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Killing: Killing container with id docker://pod-with-poststart-exec-hook:Need to kill Pod Jan 30 11:39:10.634: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 11:39:10.634: INFO: pod-handle-http-request hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:33:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:33:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:33:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:33:05 +0000 UTC }] Jan 30 11:39:10.635: INFO: pod-with-poststart-exec-hook hunter-server-hu5at5svl7ps Running 15s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:33:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:36:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:36:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 11:33:16 +0000 UTC }] Jan 30 11:39:10.635: INFO: coredns-54ff9cd656-79kxx hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46
+0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC }] Jan 30 11:39:10.635: INFO: coredns-54ff9cd656-bmkk4 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC }] Jan 30 11:39:10.635: INFO: etcd-hunter-server-hu5at5svl7ps hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC }] Jan 30 11:39:10.635: INFO: kube-apiserver-hunter-server-hu5at5svl7ps hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC }] Jan 30 11:39:10.635: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC }] Jan 30 11:39:10.635: INFO: kube-proxy-bqnnz 
hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC }] Jan 30 11:39:10.635: INFO: kube-scheduler-hunter-server-hu5at5svl7ps hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC }] Jan 30 11:39:10.635: INFO: weave-net-tqwf2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:11:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:11:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC }] Jan 30 11:39:10.635: INFO: Jan 30 11:39:10.651: INFO: Logging node info for node hunter-server-hu5at5svl7ps Jan 30 11:39:10.665: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:19963719,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 
0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-30 11:39:06 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-30 11:39:06 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-30 11:39:06 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-30 11:39:06 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:70821e443be75ea38bdf52a974fd2271babd5875b2b1964f05025981c75a6717 nginx:latest] 126698067} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:8aa7f6a9585d908a63e5e418dc5d14ae7467d2e36e1ab4f0d8f9d059a3d071ce] 126324348} {[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 
nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} 
{[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} 
{[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Jan 30 11:39:10.666: INFO: Logging kubelet events for node hunter-server-hu5at5svl7ps Jan 30 11:39:10.674: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps Jan 30 11:39:10.700: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded) Jan 30 11:39:10.700: INFO: Container kube-proxy ready: true, restart count 0 Jan 30 11:39:10.700: INFO: etcd-hunter-server-hu5at5svl7ps started at (0+0 container statuses recorded) Jan 30 11:39:10.700: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded) Jan 30 11:39:10.700: INFO: Container weave ready: true, restart count 0 Jan 30 11:39:10.700: INFO: Container weave-npc ready: true, restart count 0 Jan 30 11:39:10.700: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded) Jan 30 11:39:10.700: INFO: Container coredns ready: true, restart count 0 Jan 30 11:39:10.700: INFO: pod-with-poststart-exec-hook started at 2020-01-30 11:33:16 +0000 UTC (0+1 container statuses recorded) Jan 30 11:39:10.700: INFO: Container pod-with-poststart-exec-hook ready: true, restart count 1 Jan 30 11:39:10.700: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at (0+0 container statuses recorded) Jan 30 11:39:10.700: INFO: 
kube-apiserver-hunter-server-hu5at5svl7ps started at (0+0 container statuses recorded) Jan 30 11:39:10.700: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at (0+0 container statuses recorded) Jan 30 11:39:10.700: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded) Jan 30 11:39:10.700: INFO: Container coredns ready: true, restart count 0 Jan 30 11:39:10.700: INFO: pod-handle-http-request started at 2020-01-30 11:33:05 +0000 UTC (0+1 container statuses recorded) Jan 30 11:39:10.700: INFO: Container pod-handle-http-request ready: true, restart count 0 W0130 11:39:10.716470 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 30 11:39:10.771: INFO: Latency metrics for node hunter-server-hu5at5svl7ps Jan 30 11:39:10.772: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m50.725165s} Jan 30 11:39:10.772: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m50.725165s} Jan 30 11:39:10.772: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m50.725165s} Jan 30 11:39:10.772: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:31.261999s} Jan 30 11:39:10.772: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:12.317933s} Jan 30 11:39:10.772: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.021496s} Jan 30 11:39:10.772: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.5 Latency:12.005857s} Jan 30 11:39:10.772: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:11.142092s} Jan 30 11:39:10.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dq2qh" for this 
suite. Jan 30 11:39:44.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:39:44.947: INFO: namespace: e2e-tests-container-lifecycle-hook-dq2qh, resource: bindings, ignored listing per whitelist Jan 30 11:39:45.037: INFO: namespace e2e-tests-container-lifecycle-hook-dq2qh deletion completed in 34.253035725s • Failure [399.519 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 wait for pod "pod-with-poststart-exec-hook" to disappear Expected success, but got an error: <*errors.errorString | 0xc0000a18b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:39:45.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 30 11:39:45.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-vlll4' Jan 30 11:39:47.135: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 30 11:39:47.136: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jan 30 11:39:47.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-vlll4' Jan 30 11:39:47.429: INFO: stderr: "" Jan 30 11:39:47.429: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:39:47.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vlll4" for this suite. 
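[Editor's note: the stderr above shows that `kubectl run --generator=job/v1` was already deprecated in favor of `kubectl create`. A minimal Job manifest that would produce an equivalent object is sketched below; the metadata name and image are taken from the log, while the container name and overall shape are assumptions, not the manifest the test framework generated.]

```yaml
# Sketch of a Job equivalent to the deprecated
# `kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine`
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job        # name taken from the log output
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job  # container name is an assumption
        image: docker.io/library/nginx:1.14-alpine
      restartPolicy: OnFailure    # the restart policy the test exercises
```

Applying this with `kubectl create -f job.yaml` would yield the same `job.batch/e2e-test-nginx-job created` confirmation seen above.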
Jan 30 11:39:55.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:39:55.710: INFO: namespace: e2e-tests-kubectl-vlll4, resource: bindings, ignored listing per whitelist Jan 30 11:39:55.715: INFO: namespace e2e-tests-kubectl-vlll4 deletion completed in 8.264530889s • [SLOW TEST:10.677 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:39:55.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 30 11:39:55.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cdbwt' Jan 30 11:39:56.467: INFO: stderr: "" Jan 30 11:39:56.467: INFO: stdout: "replicationcontroller/redis-master created\n" 
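[Editor's note: the test pipes a manifest to `kubectl create -f -` to create the `redis-master` ReplicationController. The manifest itself is not shown in the log; the sketch below is a plausible reconstruction, assuming only what the log confirms: the `app: redis` selector, the `redis-master` name, and the redis test image present on the node. All other fields are illustrative.]

```yaml
# Hypothetical reconstruction of the manifest piped to `kubectl create -f -`
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master          # name confirmed by "replicationcontroller/redis-master created"
spec:
  replicas: 1
  selector:
    app: redis                # selector confirmed by "Selector matched 1 pods for map[app:redis]"
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0  # image listed in the node's image cache
        ports:
        - containerPort: 6379
```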
STEP: Waiting for Redis master to start. Jan 30 11:39:57.481: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:39:57.481: INFO: Found 0 / 1 Jan 30 11:39:58.505: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:39:58.505: INFO: Found 0 / 1 Jan 30 11:39:59.484: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:39:59.484: INFO: Found 0 / 1 Jan 30 11:40:00.522: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:40:00.522: INFO: Found 0 / 1 Jan 30 11:40:01.518: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:40:01.518: INFO: Found 0 / 1 Jan 30 11:40:02.619: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:40:02.619: INFO: Found 0 / 1 Jan 30 11:40:03.535: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:40:03.535: INFO: Found 0 / 1 Jan 30 11:40:04.509: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:40:04.509: INFO: Found 0 / 1 Jan 30 11:40:05.508: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:40:05.508: INFO: Found 0 / 1 Jan 30 11:40:06.497: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:40:06.498: INFO: Found 0 / 1 Jan 30 11:40:07.513: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:40:07.514: INFO: Found 1 / 1 Jan 30 11:40:07.514: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 30 11:40:07.519: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:40:07.519: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 30 11:40:07.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-msqjc --namespace=e2e-tests-kubectl-cdbwt -p {"metadata":{"annotations":{"x":"y"}}}' Jan 30 11:40:07.763: INFO: stderr: "" Jan 30 11:40:07.763: INFO: stdout: "pod/redis-master-msqjc patched\n" STEP: checking annotations Jan 30 11:40:07.837: INFO: Selector matched 1 pods for map[app:redis] Jan 30 11:40:07.837: INFO: ForEach: Found 1 pods from the filter. 
Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:40:07.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cdbwt" for this suite. Jan 30 11:40:31.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:40:32.157: INFO: namespace: e2e-tests-kubectl-cdbwt, resource: bindings, ignored listing per whitelist Jan 30 11:40:32.181: INFO: namespace e2e-tests-kubectl-cdbwt deletion completed in 24.332790081s • [SLOW TEST:36.465 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:40:32.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Jan 
30 11:40:32.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jan 30 11:40:32.725: INFO: stderr: "" Jan 30 11:40:32.726: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:40:32.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9bw7h" for this suite. 
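[Editor's note: the api-versions test simply asserts that the core group/version `v1` appears in the server's list. The check can be sketched as below; a subset of the list actually printed in the log is inlined so the snippet runs without a cluster, where on a live cluster `api_versions="$(kubectl api-versions)"` would be used instead.]

```shell
# Subset of the api-versions output captured in the log above.
api_versions='apps/v1
batch/v1
networking.k8s.io/v1
rbac.authorization.k8s.io/v1
storage.k8s.io/v1
v1'

# grep -x matches the whole line, so "apps/v1" etc. do not count as "v1".
if printf '%s\n' "$api_versions" | grep -qx 'v1'; then
  echo "v1 is in available api versions"
fi
```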
Jan 30 11:40:38.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:40:38.868: INFO: namespace: e2e-tests-kubectl-9bw7h, resource: bindings, ignored listing per whitelist Jan 30 11:40:38.951: INFO: namespace e2e-tests-kubectl-9bw7h deletion completed in 6.195368974s • [SLOW TEST:6.770 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:40:38.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 30 11:40:39.138: INFO: Waiting up to 5m0s for pod "downwardapi-volume-568742cb-4355-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-fmmvg" to be "success or failure" Jan 30 11:40:39.149: INFO: Pod 
"downwardapi-volume-568742cb-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.295914ms Jan 30 11:40:41.161: INFO: Pod "downwardapi-volume-568742cb-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02287233s Jan 30 11:40:43.171: INFO: Pod "downwardapi-volume-568742cb-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032573844s Jan 30 11:40:45.191: INFO: Pod "downwardapi-volume-568742cb-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052716656s Jan 30 11:40:47.235: INFO: Pod "downwardapi-volume-568742cb-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097028196s Jan 30 11:40:49.256: INFO: Pod "downwardapi-volume-568742cb-4355-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117558987s STEP: Saw pod success Jan 30 11:40:49.256: INFO: Pod "downwardapi-volume-568742cb-4355-11ea-a47a-0242ac110005" satisfied condition "success or failure" Jan 30 11:40:49.260: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-568742cb-4355-11ea-a47a-0242ac110005 container client-container: STEP: delete the pod Jan 30 11:40:49.300: INFO: Waiting for pod downwardapi-volume-568742cb-4355-11ea-a47a-0242ac110005 to disappear Jan 30 11:40:49.304: INFO: Pod downwardapi-volume-568742cb-4355-11ea-a47a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:40:49.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fmmvg" for this suite. 
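[Editor's note: the projected downward API test above creates a pod whose volume item carries an explicit file mode. A minimal manifest of that shape is sketched below; the container name `client-container` appears in the log, while the pod name, image, command, mount path, item path, and mode value are illustrative assumptions.]

```yaml
# Sketch of a pod with a projected downwardAPI volume item that sets a file mode
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name confirmed by the log
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400               # per-item mode, the field this test exercises
```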
Jan 30 11:40:55.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:40:55.631: INFO: namespace: e2e-tests-projected-fmmvg, resource: bindings, ignored listing per whitelist Jan 30 11:40:55.708: INFO: namespace e2e-tests-projected-fmmvg deletion completed in 6.398545508s • [SLOW TEST:16.756 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:40:55.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 30 11:40:55.884: INFO: PodSpec: initContainers in spec.initContainers Jan 30 11:42:05.700: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-60852cbf-4355-11ea-a47a-0242ac110005", 
GenerateName:"", Namespace:"e2e-tests-init-container-22wf2", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-22wf2/pods/pod-init-60852cbf-4355-11ea-a47a-0242ac110005", UID:"608c94ae-4355-11ea-a994-fa163e34d433", ResourceVersion:"19964073", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715981255, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"884340662"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dbscf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001d94680), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dbscf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dbscf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dbscf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0022c0a78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001eb89c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0022c0b80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022c0bd0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0022c0bd8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0022c0bdc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715981256, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715981256, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715981256, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715981256, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0024403c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00152bea0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00152bf80)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://e2b489281ed79c5cd0b5de893a787f394e1054171d78fab401dfd479da21a748"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002440400), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024403e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:42:05.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-22wf2" for this suite. 
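For reference, the `v1.Pod` value dumped above corresponds to roughly the following manifest. This is a reconstruction from the logged struct, not the test's source: names, images, commands, labels, and resource values are taken verbatim from the dump, and all other fields are left at their defaults.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-60852cbf-4355-11ea-a47a-0242ac110005
  namespace: e2e-tests-init-container-22wf2
  labels:
    name: foo
    time: "884340662"
spec:
  restartPolicy: Always        # failed init containers are retried in place
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]    # always exits non-zero, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"
```

The dumped status is consistent with this: the pod stays `Pending` with reason `ContainersNotInitialized`, `init1` has `RestartCount:3`, and `init2` and `run1` remain `Waiting` with empty image IDs, which is exactly what the spec under test asserts.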
Jan 30 11:42:29.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:42:30.051: INFO: namespace: e2e-tests-init-container-22wf2, resource: bindings, ignored listing per whitelist
Jan 30 11:42:30.058: INFO: namespace e2e-tests-init-container-22wf2 deletion completed in 24.315220642s
• [SLOW TEST:94.349 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:42:30.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-98c3d0f9-4355-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 11:42:30.271: INFO: Waiting up to 5m0s for pod "pod-configmaps-98c54d5e-4355-11ea-a47a-0242ac110005" in namespace "e2e-tests-configmap-6stll" to be "success or failure"
Jan 30 11:42:30.286: INFO: Pod "pod-configmaps-98c54d5e-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.841805ms
Jan 30 11:42:32.461: INFO: Pod "pod-configmaps-98c54d5e-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190085752s
Jan 30 11:42:34.475: INFO: Pod "pod-configmaps-98c54d5e-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203779723s
Jan 30 11:42:36.519: INFO: Pod "pod-configmaps-98c54d5e-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247899146s
Jan 30 11:42:39.136: INFO: Pod "pod-configmaps-98c54d5e-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.864664807s
Jan 30 11:42:41.154: INFO: Pod "pod-configmaps-98c54d5e-4355-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.883245355s
STEP: Saw pod success
Jan 30 11:42:41.155: INFO: Pod "pod-configmaps-98c54d5e-4355-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:42:41.172: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-98c54d5e-4355-11ea-a47a-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 30 11:42:41.497: INFO: Waiting for pod pod-configmaps-98c54d5e-4355-11ea-a47a-0242ac110005 to disappear
Jan 30 11:42:41.513: INFO: Pod pod-configmaps-98c54d5e-4355-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:42:41.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6stll" for this suite.
Jan 30 11:42:47.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:42:47.764: INFO: namespace: e2e-tests-configmap-6stll, resource: bindings, ignored listing per whitelist
Jan 30 11:42:47.780: INFO: namespace e2e-tests-configmap-6stll deletion completed in 6.257912515s
• [SLOW TEST:17.721 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:42:47.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan 30 11:42:48.684: INFO: created pod pod-service-account-defaultsa
Jan 30 11:42:48.684: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 30 11:42:48.698: INFO: created pod pod-service-account-mountsa
Jan 30 11:42:48.698: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 30 11:42:48.766: INFO: created pod pod-service-account-nomountsa
Jan 30 11:42:48.766: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 30 11:42:48.891: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 30 11:42:48.892: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 30 11:42:48.942: INFO: created pod pod-service-account-mountsa-mountspec
Jan 30 11:42:48.943: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 30 11:42:48.969: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 30 11:42:48.969: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 30 11:42:49.147: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 30 11:42:49.148: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 30 11:42:49.360: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 30 11:42:49.361: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 30 11:42:49.397: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 30 11:42:49.397: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:42:49.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-4gvxb" for this suite.
Jan 30 11:43:17.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:43:17.125: INFO: namespace: e2e-tests-svcaccounts-4gvxb, resource: bindings, ignored listing per whitelist
Jan 30 11:43:17.215: INFO: namespace e2e-tests-svcaccounts-4gvxb deletion completed in 27.77549681s
• [SLOW TEST:29.435 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:43:17.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-b4dca617-4355-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 11:43:17.459: INFO: Waiting up to 5m0s for pod "pod-configmaps-b4dea47a-4355-11ea-a47a-0242ac110005" in namespace "e2e-tests-configmap-wxhz6" to be "success or failure"
Jan 30 11:43:17.470: INFO: Pod "pod-configmaps-b4dea47a-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false.
Elapsed: 11.00109ms
Jan 30 11:43:19.481: INFO: Pod "pod-configmaps-b4dea47a-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022264185s
Jan 30 11:43:21.504: INFO: Pod "pod-configmaps-b4dea47a-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045205023s
Jan 30 11:43:23.847: INFO: Pod "pod-configmaps-b4dea47a-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.387948004s
Jan 30 11:43:25.883: INFO: Pod "pod-configmaps-b4dea47a-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.424027419s
Jan 30 11:43:27.899: INFO: Pod "pod-configmaps-b4dea47a-4355-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.440187769s
STEP: Saw pod success
Jan 30 11:43:27.899: INFO: Pod "pod-configmaps-b4dea47a-4355-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:43:27.907: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b4dea47a-4355-11ea-a47a-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 30 11:43:28.664: INFO: Waiting for pod pod-configmaps-b4dea47a-4355-11ea-a47a-0242ac110005 to disappear
Jan 30 11:43:28.679: INFO: Pod pod-configmaps-b4dea47a-4355-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:43:28.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wxhz6" for this suite.
Jan 30 11:43:36.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:43:36.890: INFO: namespace: e2e-tests-configmap-wxhz6, resource: bindings, ignored listing per whitelist
Jan 30 11:43:36.942: INFO: namespace e2e-tests-configmap-wxhz6 deletion completed in 8.250435802s
• [SLOW TEST:19.726 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:43:36.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 11:43:37.277: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0b04c00-4355-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-fqzdf" to be "success or failure"
Jan 30 11:43:37.292: INFO: Pod "downwardapi-volume-c0b04c00-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.437232ms
Jan 30 11:43:39.491: INFO: Pod "downwardapi-volume-c0b04c00-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213661556s
Jan 30 11:43:41.509: INFO: Pod "downwardapi-volume-c0b04c00-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.230934392s
Jan 30 11:43:43.533: INFO: Pod "downwardapi-volume-c0b04c00-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.254918103s
Jan 30 11:43:45.545: INFO: Pod "downwardapi-volume-c0b04c00-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.266995379s
Jan 30 11:43:47.579: INFO: Pod "downwardapi-volume-c0b04c00-4355-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.301103968s
Jan 30 11:43:49.885: INFO: Pod "downwardapi-volume-c0b04c00-4355-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.606902378s
STEP: Saw pod success
Jan 30 11:43:49.885: INFO: Pod "downwardapi-volume-c0b04c00-4355-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 11:43:49.894: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c0b04c00-4355-11ea-a47a-0242ac110005 container client-container:
STEP: delete the pod
Jan 30 11:43:49.985: INFO: Waiting for pod downwardapi-volume-c0b04c00-4355-11ea-a47a-0242ac110005 to disappear
Jan 30 11:43:50.144: INFO: Pod downwardapi-volume-c0b04c00-4355-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:43:50.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fqzdf" for this suite.
Jan 30 11:43:56.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:43:56.251: INFO: namespace: e2e-tests-downward-api-fqzdf, resource: bindings, ignored listing per whitelist
Jan 30 11:43:56.429: INFO: namespace e2e-tests-downward-api-fqzdf deletion completed in 6.274537144s
• [SLOW TEST:19.487 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:43:56.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-qqhbb
I0130 11:43:56.773398 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-qqhbb, replica count: 1
I0130 11:43:57.875137 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0130 11:43:58.875843 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0
terminating, 0 unknown, 0 runningButNotReady I0130 11:43:59.876466 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 11:44:00.877234 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 11:44:01.879552 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 11:44:02.880987 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 11:44:03.882003 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 30 11:44:04.029: INFO: Created: latency-svc-246cf Jan 30 11:44:04.185: INFO: Got endpoints: latency-svc-246cf [201.656976ms] Jan 30 11:44:04.335: INFO: Created: latency-svc-ntj76 Jan 30 11:44:04.340: INFO: Got endpoints: latency-svc-ntj76 [154.444688ms] Jan 30 11:44:04.393: INFO: Created: latency-svc-ws2mx Jan 30 11:44:04.524: INFO: Created: latency-svc-wqmn6 Jan 30 11:44:04.524: INFO: Got endpoints: latency-svc-ws2mx [336.295171ms] Jan 30 11:44:04.537: INFO: Got endpoints: latency-svc-wqmn6 [348.790857ms] Jan 30 11:44:04.590: INFO: Created: latency-svc-wxb6d Jan 30 11:44:04.608: INFO: Got endpoints: latency-svc-wxb6d [419.959453ms] Jan 30 11:44:04.691: INFO: Created: latency-svc-lrh7p Jan 30 11:44:04.704: INFO: Got endpoints: latency-svc-lrh7p [516.048708ms] Jan 30 11:44:04.775: INFO: Created: latency-svc-j6jbx Jan 30 11:44:05.011: INFO: Got endpoints: latency-svc-j6jbx [822.710306ms] Jan 30 11:44:05.033: INFO: Created: latency-svc-bvd5l Jan 30 11:44:05.093: INFO: Got endpoints: latency-svc-bvd5l [904.92511ms] Jan 30 11:44:05.225: INFO: Created: latency-svc-mg2hh Jan 
30 11:44:05.257: INFO: Got endpoints: latency-svc-mg2hh [1.068630534s]
Jan 30 11:44:05.441: INFO: Created: latency-svc-cbzpq
Jan 30 11:44:05.471: INFO: Got endpoints: latency-svc-cbzpq [377.401255ms]
Jan 30 11:44:05.510: INFO: Created: latency-svc-wkndm
Jan 30 11:44:05.518: INFO: Got endpoints: latency-svc-wkndm [1.330176095s]
Jan 30 11:44:05.718: INFO: Created: latency-svc-z5nx6
Jan 30 11:44:05.758: INFO: Got endpoints: latency-svc-z5nx6 [1.569482244s]
Jan 30 11:44:06.170: INFO: Created: latency-svc-x5vvf
Jan 30 11:44:06.170: INFO: Got endpoints: latency-svc-x5vvf [1.984393242s]
Jan 30 11:44:06.411: INFO: Created: latency-svc-zcr4f
Jan 30 11:44:06.426: INFO: Got endpoints: latency-svc-zcr4f [2.237651936s]
Jan 30 11:44:06.691: INFO: Created: latency-svc-869tr
Jan 30 11:44:06.727: INFO: Got endpoints: latency-svc-869tr [2.54087536s]
Jan 30 11:44:06.926: INFO: Created: latency-svc-ssp4s
Jan 30 11:44:06.966: INFO: Got endpoints: latency-svc-ssp4s [2.778372039s]
Jan 30 11:44:07.139: INFO: Created: latency-svc-l8h8p
Jan 30 11:44:07.156: INFO: Got endpoints: latency-svc-l8h8p [2.967176695s]
Jan 30 11:44:07.196: INFO: Created: latency-svc-q2pc9
Jan 30 11:44:07.238: INFO: Got endpoints: latency-svc-q2pc9 [2.897452702s]
Jan 30 11:44:07.476: INFO: Created: latency-svc-s85j7
Jan 30 11:44:07.502: INFO: Got endpoints: latency-svc-s85j7 [2.978243182s]
Jan 30 11:44:07.691: INFO: Created: latency-svc-hqdxj
Jan 30 11:44:07.703: INFO: Got endpoints: latency-svc-hqdxj [3.166369692s]
Jan 30 11:44:07.767: INFO: Created: latency-svc-k97l2
Jan 30 11:44:07.877: INFO: Got endpoints: latency-svc-k97l2 [3.269136798s]
Jan 30 11:44:07.936: INFO: Created: latency-svc-48qcb
Jan 30 11:44:07.945: INFO: Got endpoints: latency-svc-48qcb [3.241051232s]
Jan 30 11:44:08.188: INFO: Created: latency-svc-kmxzn
Jan 30 11:44:08.202: INFO: Got endpoints: latency-svc-kmxzn [3.190744014s]
Jan 30 11:44:08.230: INFO: Created: latency-svc-k26m6
Jan 30 11:44:08.253: INFO: Got endpoints: latency-svc-k26m6 [2.995747523s]
Jan 30 11:44:08.408: INFO: Created: latency-svc-srgch
Jan 30 11:44:08.419: INFO: Got endpoints: latency-svc-srgch [2.947594069s]
Jan 30 11:44:08.611: INFO: Created: latency-svc-zlzrj
Jan 30 11:44:08.671: INFO: Got endpoints: latency-svc-zlzrj [3.153099609s]
Jan 30 11:44:08.843: INFO: Created: latency-svc-5kxrh
Jan 30 11:44:08.890: INFO: Got endpoints: latency-svc-5kxrh [3.13192405s]
Jan 30 11:44:09.080: INFO: Created: latency-svc-x4fqx
Jan 30 11:44:09.119: INFO: Got endpoints: latency-svc-x4fqx [2.94865409s]
Jan 30 11:44:09.280: INFO: Created: latency-svc-dxr2x
Jan 30 11:44:09.343: INFO: Got endpoints: latency-svc-dxr2x [2.916564145s]
Jan 30 11:44:09.468: INFO: Created: latency-svc-swzw5
Jan 30 11:44:09.477: INFO: Got endpoints: latency-svc-swzw5 [2.749778148s]
Jan 30 11:44:09.715: INFO: Created: latency-svc-5qx2d
Jan 30 11:44:09.734: INFO: Got endpoints: latency-svc-5qx2d [2.766847975s]
Jan 30 11:44:09.936: INFO: Created: latency-svc-2l8kc
Jan 30 11:44:09.946: INFO: Got endpoints: latency-svc-2l8kc [2.790712157s]
Jan 30 11:44:10.164: INFO: Created: latency-svc-dztp6
Jan 30 11:44:10.216: INFO: Got endpoints: latency-svc-dztp6 [2.978077848s]
Jan 30 11:44:10.225: INFO: Created: latency-svc-vxhq9
Jan 30 11:44:10.228: INFO: Got endpoints: latency-svc-vxhq9 [2.725021189s]
Jan 30 11:44:10.398: INFO: Created: latency-svc-rdkpv
Jan 30 11:44:10.422: INFO: Got endpoints: latency-svc-rdkpv [2.718557295s]
Jan 30 11:44:10.481: INFO: Created: latency-svc-84q7p
Jan 30 11:44:10.651: INFO: Got endpoints: latency-svc-84q7p [2.773095747s]
Jan 30 11:44:10.668: INFO: Created: latency-svc-gk4gc
Jan 30 11:44:10.881: INFO: Got endpoints: latency-svc-gk4gc [2.936418938s]
Jan 30 11:44:10.885: INFO: Created: latency-svc-dz7r5
Jan 30 11:44:10.905: INFO: Got endpoints: latency-svc-dz7r5 [2.703099031s]
Jan 30 11:44:11.152: INFO: Created: latency-svc-n5v4l
Jan 30 11:44:11.170: INFO: Got endpoints: latency-svc-n5v4l [2.916881947s]
Jan 30 11:44:11.380: INFO: Created: latency-svc-9q97p
Jan 30 11:44:11.381: INFO: Got endpoints: latency-svc-9q97p [2.962613838s]
Jan 30 11:44:11.456: INFO: Created: latency-svc-pjn7t
Jan 30 11:44:11.632: INFO: Got endpoints: latency-svc-pjn7t [2.959887099s]
Jan 30 11:44:11.674: INFO: Created: latency-svc-m9qcs
Jan 30 11:44:11.683: INFO: Got endpoints: latency-svc-m9qcs [2.792428401s]
Jan 30 11:44:11.747: INFO: Created: latency-svc-pq6zw
Jan 30 11:44:11.856: INFO: Got endpoints: latency-svc-pq6zw [2.73671602s]
Jan 30 11:44:12.330: INFO: Created: latency-svc-ms6tx
Jan 30 11:44:12.340: INFO: Got endpoints: latency-svc-ms6tx [2.996428102s]
Jan 30 11:44:12.456: INFO: Created: latency-svc-f56wz
Jan 30 11:44:12.477: INFO: Got endpoints: latency-svc-f56wz [2.999982188s]
Jan 30 11:44:13.095: INFO: Created: latency-svc-4lfrv
Jan 30 11:44:13.367: INFO: Got endpoints: latency-svc-4lfrv [3.632459367s]
Jan 30 11:44:13.479: INFO: Created: latency-svc-dgnbx
Jan 30 11:44:13.576: INFO: Got endpoints: latency-svc-dgnbx [3.629678919s]
Jan 30 11:44:13.748: INFO: Created: latency-svc-lr9m5
Jan 30 11:44:13.776: INFO: Got endpoints: latency-svc-lr9m5 [3.559413206s]
Jan 30 11:44:13.817: INFO: Created: latency-svc-z26xn
Jan 30 11:44:13.949: INFO: Got endpoints: latency-svc-z26xn [3.721064509s]
Jan 30 11:44:13.954: INFO: Created: latency-svc-tbzc9
Jan 30 11:44:13.986: INFO: Got endpoints: latency-svc-tbzc9 [3.562981044s]
Jan 30 11:44:14.178: INFO: Created: latency-svc-pwdxg
Jan 30 11:44:14.218: INFO: Got endpoints: latency-svc-pwdxg [3.566493972s]
Jan 30 11:44:14.266: INFO: Created: latency-svc-9vtmt
Jan 30 11:44:14.456: INFO: Got endpoints: latency-svc-9vtmt [3.574608704s]
Jan 30 11:44:14.478: INFO: Created: latency-svc-2qhjh
Jan 30 11:44:14.493: INFO: Got endpoints: latency-svc-2qhjh [3.587525039s]
Jan 30 11:44:14.546: INFO: Created: latency-svc-rzznl
Jan 30 11:44:14.640: INFO: Got endpoints: latency-svc-rzznl [3.469783592s]
Jan 30 11:44:14.674: INFO: Created: latency-svc-rsnvj
Jan 30 11:44:14.679: INFO: Got endpoints:
latency-svc-rsnvj [3.297520287s]
Jan 30 11:44:14.710: INFO: Created: latency-svc-xg5sc
Jan 30 11:44:14.726: INFO: Got endpoints: latency-svc-xg5sc [3.093660499s]
Jan 30 11:44:14.847: INFO: Created: latency-svc-c7ck9
Jan 30 11:44:14.858: INFO: Got endpoints: latency-svc-c7ck9 [3.174201267s]
Jan 30 11:44:14.934: INFO: Created: latency-svc-p29ct
Jan 30 11:44:14.935: INFO: Got endpoints: latency-svc-p29ct [3.078060566s]
Jan 30 11:44:15.045: INFO: Created: latency-svc-c2p4l
Jan 30 11:44:15.054: INFO: Got endpoints: latency-svc-c2p4l [2.714303374s]
Jan 30 11:44:15.125: INFO: Created: latency-svc-jtm8p
Jan 30 11:44:15.292: INFO: Got endpoints: latency-svc-jtm8p [2.814486665s]
Jan 30 11:44:15.316: INFO: Created: latency-svc-x6fmd
Jan 30 11:44:15.343: INFO: Got endpoints: latency-svc-x6fmd [1.975854359s]
Jan 30 11:44:15.393: INFO: Created: latency-svc-ff6m4
Jan 30 11:44:15.498: INFO: Got endpoints: latency-svc-ff6m4 [1.921036174s]
Jan 30 11:44:15.503: INFO: Created: latency-svc-vrjtc
Jan 30 11:44:15.520: INFO: Got endpoints: latency-svc-vrjtc [1.743963473s]
Jan 30 11:44:15.564: INFO: Created: latency-svc-5sf46
Jan 30 11:44:15.573: INFO: Got endpoints: latency-svc-5sf46 [1.623197478s]
Jan 30 11:44:15.724: INFO: Created: latency-svc-7vkx7
Jan 30 11:44:15.739: INFO: Got endpoints: latency-svc-7vkx7 [1.753390521s]
Jan 30 11:44:15.793: INFO: Created: latency-svc-4qrhv
Jan 30 11:44:15.802: INFO: Got endpoints: latency-svc-4qrhv [1.584360122s]
Jan 30 11:44:16.220: INFO: Created: latency-svc-nqd6c
Jan 30 11:44:16.220: INFO: Created: latency-svc-zmpnn
Jan 30 11:44:16.233: INFO: Got endpoints: latency-svc-zmpnn [1.740264395s]
Jan 30 11:44:16.247: INFO: Got endpoints: latency-svc-nqd6c [1.789541861s]
Jan 30 11:44:16.516: INFO: Created: latency-svc-88w5r
Jan 30 11:44:16.516: INFO: Got endpoints: latency-svc-88w5r [1.876315139s]
Jan 30 11:44:16.738: INFO: Created: latency-svc-fc8g4
Jan 30 11:44:16.787: INFO: Got endpoints: latency-svc-fc8g4 [2.107598278s]
Jan 30 11:44:16.976: INFO: Created: latency-svc-5ws4m
Jan 30 11:44:16.977: INFO: Got endpoints: latency-svc-5ws4m [2.250884372s]
Jan 30 11:44:17.125: INFO: Created: latency-svc-s8q4m
Jan 30 11:44:17.159: INFO: Got endpoints: latency-svc-s8q4m [2.301258962s]
Jan 30 11:44:17.334: INFO: Created: latency-svc-j5ffv
Jan 30 11:44:17.383: INFO: Got endpoints: latency-svc-j5ffv [2.447943625s]
Jan 30 11:44:17.399: INFO: Created: latency-svc-kkm29
Jan 30 11:44:17.587: INFO: Got endpoints: latency-svc-kkm29 [2.532400998s]
Jan 30 11:44:17.603: INFO: Created: latency-svc-9tp2w
Jan 30 11:44:17.605: INFO: Got endpoints: latency-svc-9tp2w [2.312820282s]
Jan 30 11:44:17.887: INFO: Created: latency-svc-l9xxf
Jan 30 11:44:18.032: INFO: Got endpoints: latency-svc-l9xxf [2.688588472s]
Jan 30 11:44:18.111: INFO: Created: latency-svc-86gmf
Jan 30 11:44:18.201: INFO: Got endpoints: latency-svc-86gmf [2.702783049s]
Jan 30 11:44:18.231: INFO: Created: latency-svc-gvgrv
Jan 30 11:44:18.238: INFO: Got endpoints: latency-svc-gvgrv [2.717546692s]
Jan 30 11:44:18.298: INFO: Created: latency-svc-txms7
Jan 30 11:44:18.413: INFO: Got endpoints: latency-svc-txms7 [2.839961682s]
Jan 30 11:44:18.477: INFO: Created: latency-svc-pr44w
Jan 30 11:44:18.600: INFO: Got endpoints: latency-svc-pr44w [2.859830647s]
Jan 30 11:44:18.628: INFO: Created: latency-svc-qnqbq
Jan 30 11:44:18.668: INFO: Created: latency-svc-m5qn6
Jan 30 11:44:18.676: INFO: Got endpoints: latency-svc-qnqbq [2.873539794s]
Jan 30 11:44:18.690: INFO: Got endpoints: latency-svc-m5qn6 [2.456488918s]
Jan 30 11:44:18.956: INFO: Created: latency-svc-xl76l
Jan 30 11:44:18.978: INFO: Got endpoints: latency-svc-xl76l [2.73093842s]
Jan 30 11:44:19.174: INFO: Created: latency-svc-sdv7j
Jan 30 11:44:19.202: INFO: Got endpoints: latency-svc-sdv7j [2.685676699s]
Jan 30 11:44:19.264: INFO: Created: latency-svc-czfct
Jan 30 11:44:19.362: INFO: Got endpoints: latency-svc-czfct [2.574264023s]
Jan 30 11:44:19.416: INFO: Created: latency-svc-glwvm
Jan 30 11:44:19.563: INFO: Got endpoints: latency-svc-glwvm [2.586264348s]
Jan 30 11:44:19.586: INFO: Created: latency-svc-6m2rh
Jan 30 11:44:19.613: INFO: Got endpoints: latency-svc-6m2rh [2.45334887s]
Jan 30 11:44:19.663: INFO: Created: latency-svc-7d28g
Jan 30 11:44:19.775: INFO: Got endpoints: latency-svc-7d28g [2.392078851s]
Jan 30 11:44:19.890: INFO: Created: latency-svc-wxb6s
Jan 30 11:44:20.155: INFO: Got endpoints: latency-svc-wxb6s [2.56833047s]
Jan 30 11:44:20.188: INFO: Created: latency-svc-9fdhx
Jan 30 11:44:20.222: INFO: Got endpoints: latency-svc-9fdhx [2.616582353s]
Jan 30 11:44:20.379: INFO: Created: latency-svc-k6ccf
Jan 30 11:44:20.387: INFO: Got endpoints: latency-svc-k6ccf [2.35497843s]
Jan 30 11:44:20.428: INFO: Created: latency-svc-8zn2j
Jan 30 11:44:20.593: INFO: Got endpoints: latency-svc-8zn2j [2.391606687s]
Jan 30 11:44:20.608: INFO: Created: latency-svc-dgc5j
Jan 30 11:44:20.622: INFO: Got endpoints: latency-svc-dgc5j [2.384059673s]
Jan 30 11:44:20.808: INFO: Created: latency-svc-mrk6s
Jan 30 11:44:20.836: INFO: Got endpoints: latency-svc-mrk6s [2.423113573s]
Jan 30 11:44:20.962: INFO: Created: latency-svc-khbps
Jan 30 11:44:20.975: INFO: Got endpoints: latency-svc-khbps [2.374721756s]
Jan 30 11:44:21.043: INFO: Created: latency-svc-7wwcl
Jan 30 11:44:21.140: INFO: Got endpoints: latency-svc-7wwcl [2.463718179s]
Jan 30 11:44:21.155: INFO: Created: latency-svc-q5sqv
Jan 30 11:44:21.187: INFO: Got endpoints: latency-svc-q5sqv [2.497058509s]
Jan 30 11:44:21.470: INFO: Created: latency-svc-d2hrl
Jan 30 11:44:22.539: INFO: Got endpoints: latency-svc-d2hrl [3.56094253s]
Jan 30 11:44:22.632: INFO: Created: latency-svc-zz5fs
Jan 30 11:44:22.805: INFO: Got endpoints: latency-svc-zz5fs [3.602613088s]
Jan 30 11:44:23.322: INFO: Created: latency-svc-qljhh
Jan 30 11:44:23.354: INFO: Got endpoints: latency-svc-qljhh [3.991344768s]
Jan 30 11:44:23.491: INFO: Created: latency-svc-ch7q4
Jan 30 11:44:23.521: INFO: Got endpoints: latency-svc-ch7q4 [3.957210281s]
Jan 30 11:44:23.691:
INFO: Created: latency-svc-cpj6k
Jan 30 11:44:23.793: INFO: Got endpoints: latency-svc-cpj6k [4.179576786s]
Jan 30 11:44:23.923: INFO: Created: latency-svc-zzr6b
Jan 30 11:44:24.046: INFO: Got endpoints: latency-svc-zzr6b [4.270941136s]
Jan 30 11:44:24.138: INFO: Created: latency-svc-bxwc4
Jan 30 11:44:24.160: INFO: Got endpoints: latency-svc-bxwc4 [4.004125693s]
Jan 30 11:44:24.327: INFO: Created: latency-svc-mtqqz
Jan 30 11:44:24.495: INFO: Created: latency-svc-7jqjp
Jan 30 11:44:24.514: INFO: Got endpoints: latency-svc-7jqjp [4.12673442s]
Jan 30 11:44:24.515: INFO: Got endpoints: latency-svc-mtqqz [4.292651224s]
Jan 30 11:44:24.634: INFO: Created: latency-svc-zdbwt
Jan 30 11:44:24.689: INFO: Got endpoints: latency-svc-zdbwt [4.096083091s]
Jan 30 11:44:24.727: INFO: Created: latency-svc-52q6q
Jan 30 11:44:24.750: INFO: Got endpoints: latency-svc-52q6q [4.128195397s]
Jan 30 11:44:24.870: INFO: Created: latency-svc-zs896
Jan 30 11:44:24.893: INFO: Got endpoints: latency-svc-zs896 [4.056016472s]
Jan 30 11:44:25.112: INFO: Created: latency-svc-v927c
Jan 30 11:44:25.118: INFO: Got endpoints: latency-svc-v927c [4.142564893s]
Jan 30 11:44:25.181: INFO: Created: latency-svc-6crs6
Jan 30 11:44:25.364: INFO: Got endpoints: latency-svc-6crs6 [4.223528454s]
Jan 30 11:44:25.420: INFO: Created: latency-svc-5z2ng
Jan 30 11:44:25.433: INFO: Got endpoints: latency-svc-5z2ng [4.245226842s]
Jan 30 11:44:25.538: INFO: Created: latency-svc-7lfkv
Jan 30 11:44:25.592: INFO: Got endpoints: latency-svc-7lfkv [3.05183105s]
Jan 30 11:44:25.607: INFO: Created: latency-svc-d74dn
Jan 30 11:44:25.613: INFO: Got endpoints: latency-svc-d74dn [2.807633777s]
Jan 30 11:44:25.782: INFO: Created: latency-svc-4j8g5
Jan 30 11:44:25.799: INFO: Got endpoints: latency-svc-4j8g5 [2.445436423s]
Jan 30 11:44:25.864: INFO: Created: latency-svc-rms96
Jan 30 11:44:26.039: INFO: Created: latency-svc-dhfjm
Jan 30 11:44:26.042: INFO: Got endpoints: latency-svc-rms96 [2.520423253s]
Jan 30 11:44:26.054: INFO: Got endpoints: latency-svc-dhfjm [2.259996378s]
Jan 30 11:44:26.114: INFO: Created: latency-svc-226zf
Jan 30 11:44:26.227: INFO: Got endpoints: latency-svc-226zf [2.179971363s]
Jan 30 11:44:26.252: INFO: Created: latency-svc-sfnsq
Jan 30 11:44:26.284: INFO: Got endpoints: latency-svc-sfnsq [2.123664027s]
Jan 30 11:44:26.295: INFO: Created: latency-svc-sml7l
Jan 30 11:44:26.305: INFO: Got endpoints: latency-svc-sml7l [1.789159786s]
Jan 30 11:44:26.417: INFO: Created: latency-svc-sjp6r
Jan 30 11:44:26.437: INFO: Got endpoints: latency-svc-sjp6r [1.921750165s]
Jan 30 11:44:26.649: INFO: Created: latency-svc-47jql
Jan 30 11:44:26.650: INFO: Got endpoints: latency-svc-47jql [1.960052483s]
Jan 30 11:44:26.897: INFO: Created: latency-svc-j4cfq
Jan 30 11:44:26.949: INFO: Got endpoints: latency-svc-j4cfq [2.198894027s]
Jan 30 11:44:26.968: INFO: Created: latency-svc-dwl5d
Jan 30 11:44:27.139: INFO: Got endpoints: latency-svc-dwl5d [2.246068662s]
Jan 30 11:44:27.198: INFO: Created: latency-svc-2ths8
Jan 30 11:44:27.404: INFO: Got endpoints: latency-svc-2ths8 [2.285593823s]
Jan 30 11:44:27.453: INFO: Created: latency-svc-9r5qp
Jan 30 11:44:27.453: INFO: Got endpoints: latency-svc-9r5qp [2.089137613s]
Jan 30 11:44:27.489: INFO: Created: latency-svc-7d6ns
Jan 30 11:44:27.494: INFO: Got endpoints: latency-svc-7d6ns [2.060454046s]
Jan 30 11:44:27.611: INFO: Created: latency-svc-62qbn
Jan 30 11:44:27.634: INFO: Got endpoints: latency-svc-62qbn [2.041970292s]
Jan 30 11:44:27.681: INFO: Created: latency-svc-spctx
Jan 30 11:44:27.693: INFO: Got endpoints: latency-svc-spctx [2.079681129s]
Jan 30 11:44:27.791: INFO: Created: latency-svc-sswx4
Jan 30 11:44:27.807: INFO: Got endpoints: latency-svc-sswx4 [2.007616968s]
Jan 30 11:44:27.869: INFO: Created: latency-svc-jmdcj
Jan 30 11:44:27.959: INFO: Got endpoints: latency-svc-jmdcj [1.916773499s]
Jan 30 11:44:28.000: INFO: Created: latency-svc-t5r55
Jan 30 11:44:28.000: INFO: Got endpoints: latency-svc-t5r55 [1.945894343s]
Jan 30 11:44:28.210: INFO: Created: latency-svc-zkr8j
Jan 30 11:44:28.233: INFO: Got endpoints: latency-svc-zkr8j [2.005633175s]
Jan 30 11:44:28.284: INFO: Created: latency-svc-9lzp8
Jan 30 11:44:28.292: INFO: Got endpoints: latency-svc-9lzp8 [2.008002618s]
Jan 30 11:44:28.405: INFO: Created: latency-svc-v59g7
Jan 30 11:44:28.443: INFO: Got endpoints: latency-svc-v59g7 [2.138043713s]
Jan 30 11:44:28.483: INFO: Created: latency-svc-qn7md
Jan 30 11:44:28.612: INFO: Got endpoints: latency-svc-qn7md [2.175590881s]
Jan 30 11:44:28.655: INFO: Created: latency-svc-l2ft4
Jan 30 11:44:28.695: INFO: Created: latency-svc-7mcn7
Jan 30 11:44:28.697: INFO: Got endpoints: latency-svc-l2ft4 [2.04764165s]
Jan 30 11:44:28.837: INFO: Got endpoints: latency-svc-7mcn7 [1.887852454s]
Jan 30 11:44:28.868: INFO: Created: latency-svc-r4f7z
Jan 30 11:44:28.892: INFO: Got endpoints: latency-svc-r4f7z [1.752846956s]
Jan 30 11:44:29.038: INFO: Created: latency-svc-b4k95
Jan 30 11:44:29.065: INFO: Got endpoints: latency-svc-b4k95 [1.661387644s]
Jan 30 11:44:29.283: INFO: Created: latency-svc-c5t5d
Jan 30 11:44:29.290: INFO: Got endpoints: latency-svc-c5t5d [1.836429887s]
Jan 30 11:44:29.290: INFO: Created: latency-svc-22dss
Jan 30 11:44:29.313: INFO: Got endpoints: latency-svc-22dss [1.818859326s]
Jan 30 11:44:29.555: INFO: Created: latency-svc-24k68
Jan 30 11:44:29.619: INFO: Got endpoints: latency-svc-24k68 [1.984475323s]
Jan 30 11:44:29.774: INFO: Created: latency-svc-dd5zj
Jan 30 11:44:29.799: INFO: Got endpoints: latency-svc-dd5zj [2.106120223s]
Jan 30 11:44:29.987: INFO: Created: latency-svc-cqxlq
Jan 30 11:44:30.004: INFO: Got endpoints: latency-svc-cqxlq [2.196247037s]
Jan 30 11:44:30.261: INFO: Created: latency-svc-gbsgt
Jan 30 11:44:30.262: INFO: Got endpoints: latency-svc-gbsgt [2.302280301s]
Jan 30 11:44:30.475: INFO: Created: latency-svc-xg2wp
Jan 30 11:44:30.525: INFO: Got endpoints: latency-svc-xg2wp [2.524346455s]
Jan 30 11:44:30.788: INFO: Created: latency-svc-vpvkt
Jan 30 11:44:30.813: INFO: Got
endpoints: latency-svc-vpvkt [2.579863795s]
Jan 30 11:44:30.874: INFO: Created: latency-svc-swz97
Jan 30 11:44:30.982: INFO: Got endpoints: latency-svc-swz97 [2.690141468s]
Jan 30 11:44:31.000: INFO: Created: latency-svc-drhbf
Jan 30 11:44:31.022: INFO: Got endpoints: latency-svc-drhbf [2.57867024s]
Jan 30 11:44:31.072: INFO: Created: latency-svc-7wg78
Jan 30 11:44:31.182: INFO: Got endpoints: latency-svc-7wg78 [2.568964216s]
Jan 30 11:44:31.204: INFO: Created: latency-svc-f5rw2
Jan 30 11:44:31.228: INFO: Got endpoints: latency-svc-f5rw2 [2.530356161s]
Jan 30 11:44:31.542: INFO: Created: latency-svc-w78n4
Jan 30 11:44:31.806: INFO: Got endpoints: latency-svc-w78n4 [2.967428668s]
Jan 30 11:44:32.261: INFO: Created: latency-svc-mlcxg
Jan 30 11:44:32.261: INFO: Got endpoints: latency-svc-mlcxg [3.368674315s]
Jan 30 11:44:32.478: INFO: Created: latency-svc-np6vz
Jan 30 11:44:32.479: INFO: Got endpoints: latency-svc-np6vz [3.413009608s]
Jan 30 11:44:32.654: INFO: Created: latency-svc-wc94s
Jan 30 11:44:32.699: INFO: Got endpoints: latency-svc-wc94s [3.408619016s]
Jan 30 11:44:33.250: INFO: Created: latency-svc-ndzxt
Jan 30 11:44:33.301: INFO: Got endpoints: latency-svc-ndzxt [3.987952799s]
Jan 30 11:44:33.509: INFO: Created: latency-svc-qtc66
Jan 30 11:44:33.519: INFO: Got endpoints: latency-svc-qtc66 [3.899430208s]
Jan 30 11:44:33.559: INFO: Created: latency-svc-5rcw9
Jan 30 11:44:33.568: INFO: Got endpoints: latency-svc-5rcw9 [3.768137129s]
Jan 30 11:44:33.727: INFO: Created: latency-svc-fk2xm
Jan 30 11:44:33.751: INFO: Got endpoints: latency-svc-fk2xm [3.747383484s]
Jan 30 11:44:33.807: INFO: Created: latency-svc-b5ns4
Jan 30 11:44:33.959: INFO: Got endpoints: latency-svc-b5ns4 [3.697243904s]
Jan 30 11:44:34.000: INFO: Created: latency-svc-7j4v7
Jan 30 11:44:34.027: INFO: Got endpoints: latency-svc-7j4v7 [3.502109354s]
Jan 30 11:44:34.434: INFO: Created: latency-svc-qlh5x
Jan 30 11:44:34.445: INFO: Got endpoints: latency-svc-qlh5x [3.631721263s]
Jan 30 11:44:34.603: INFO: Created: latency-svc-8b72c
Jan 30 11:44:34.783: INFO: Created: latency-svc-jt957
Jan 30 11:44:34.797: INFO: Got endpoints: latency-svc-jt957 [3.774524108s]
Jan 30 11:44:34.804: INFO: Got endpoints: latency-svc-8b72c [3.821260935s]
Jan 30 11:44:34.874: INFO: Created: latency-svc-68f5r
Jan 30 11:44:34.970: INFO: Got endpoints: latency-svc-68f5r [3.787148845s]
Jan 30 11:44:35.004: INFO: Created: latency-svc-z6q8s
Jan 30 11:44:35.027: INFO: Got endpoints: latency-svc-z6q8s [3.798751944s]
Jan 30 11:44:35.075: INFO: Created: latency-svc-xvh5r
Jan 30 11:44:35.235: INFO: Got endpoints: latency-svc-xvh5r [3.429255171s]
Jan 30 11:44:35.252: INFO: Created: latency-svc-bf5gl
Jan 30 11:44:35.268: INFO: Got endpoints: latency-svc-bf5gl [3.006542781s]
Jan 30 11:44:35.437: INFO: Created: latency-svc-bcs9s
Jan 30 11:44:35.492: INFO: Got endpoints: latency-svc-bcs9s [3.01352178s]
Jan 30 11:44:35.505: INFO: Created: latency-svc-tg86s
Jan 30 11:44:35.614: INFO: Got endpoints: latency-svc-tg86s [2.914374938s]
Jan 30 11:44:35.647: INFO: Created: latency-svc-tgsbm
Jan 30 11:44:35.673: INFO: Got endpoints: latency-svc-tgsbm [2.371416549s]
Jan 30 11:44:35.836: INFO: Created: latency-svc-z5kc4
Jan 30 11:44:35.870: INFO: Got endpoints: latency-svc-z5kc4 [2.350453534s]
Jan 30 11:44:36.118: INFO: Created: latency-svc-f4dvw
Jan 30 11:44:36.151: INFO: Got endpoints: latency-svc-f4dvw [2.583648197s]
Jan 30 11:44:36.334: INFO: Created: latency-svc-kplgs
Jan 30 11:44:36.354: INFO: Got endpoints: latency-svc-kplgs [2.60221988s]
Jan 30 11:44:36.533: INFO: Created: latency-svc-85pk7
Jan 30 11:44:36.535: INFO: Got endpoints: latency-svc-85pk7 [2.575516371s]
Jan 30 11:44:36.671: INFO: Created: latency-svc-9zs8g
Jan 30 11:44:36.688: INFO: Got endpoints: latency-svc-9zs8g [2.660082581s]
Jan 30 11:44:36.904: INFO: Created: latency-svc-bh74b
Jan 30 11:44:36.941: INFO: Got endpoints: latency-svc-bh74b [2.495497133s]
Jan 30 11:44:37.232: INFO: Created: latency-svc-zf9zm
Jan 30 11:44:37.280: INFO: Got endpoints: latency-svc-zf9zm [2.482376036s]
Jan 30 11:44:37.584: INFO: Created: latency-svc-jcswg
Jan 30 11:44:37.584: INFO: Got endpoints: latency-svc-jcswg [2.779720504s]
Jan 30 11:44:37.775: INFO: Created: latency-svc-rmwb2
Jan 30 11:44:37.828: INFO: Created: latency-svc-vlzbn
Jan 30 11:44:37.936: INFO: Got endpoints: latency-svc-rmwb2 [2.965961279s]
Jan 30 11:44:37.939: INFO: Got endpoints: latency-svc-vlzbn [2.911549119s]
Jan 30 11:44:37.968: INFO: Created: latency-svc-t69g5
Jan 30 11:44:37.975: INFO: Got endpoints: latency-svc-t69g5 [2.739699433s]
Jan 30 11:44:38.210: INFO: Created: latency-svc-fwb4x
Jan 30 11:44:38.210: INFO: Got endpoints: latency-svc-fwb4x [2.942265147s]
Jan 30 11:44:38.453: INFO: Created: latency-svc-4rwwp
Jan 30 11:44:38.527: INFO: Got endpoints: latency-svc-4rwwp [3.03399166s]
Jan 30 11:44:38.669: INFO: Created: latency-svc-g7hrg
Jan 30 11:44:38.670: INFO: Got endpoints: latency-svc-g7hrg [3.055206875s]
Jan 30 11:44:38.849: INFO: Created: latency-svc-7cwt2
Jan 30 11:44:38.892: INFO: Got endpoints: latency-svc-7cwt2 [3.219012021s]
Jan 30 11:44:39.058: INFO: Created: latency-svc-nbmg8
Jan 30 11:44:39.089: INFO: Got endpoints: latency-svc-nbmg8 [3.218802752s]
Jan 30 11:44:39.264: INFO: Created: latency-svc-2p2g8
Jan 30 11:44:39.281: INFO: Got endpoints: latency-svc-2p2g8 [3.129064753s]
Jan 30 11:44:39.472: INFO: Created: latency-svc-tlb2d
Jan 30 11:44:39.478: INFO: Got endpoints: latency-svc-tlb2d [3.123653464s]
Jan 30 11:44:39.670: INFO: Created: latency-svc-5h8nt
Jan 30 11:44:39.683: INFO: Got endpoints: latency-svc-5h8nt [3.147419254s]
Jan 30 11:44:39.876: INFO: Created: latency-svc-pvqjq
Jan 30 11:44:39.880: INFO: Got endpoints: latency-svc-pvqjq [3.192757608s]
Jan 30 11:44:39.997: INFO: Created: latency-svc-wlh4v
Jan 30 11:44:40.018: INFO: Got endpoints: latency-svc-wlh4v [3.076431335s]
Jan 30 11:44:40.070: INFO: Created: latency-svc-kd4q2
Jan 30 11:44:40.090: INFO: Got endpoints: latency-svc-kd4q2 [2.809921819s]
Jan 30 11:44:40.255:
INFO: Created: latency-svc-hgjds
Jan 30 11:44:40.285: INFO: Got endpoints: latency-svc-hgjds [2.700330105s]
Jan 30 11:44:40.506: INFO: Created: latency-svc-dhlxp
Jan 30 11:44:40.643: INFO: Got endpoints: latency-svc-dhlxp [2.706340008s]
Jan 30 11:44:40.725: INFO: Created: latency-svc-d969r
Jan 30 11:44:40.882: INFO: Got endpoints: latency-svc-d969r [2.942580654s]
Jan 30 11:44:40.893: INFO: Created: latency-svc-ckhpf
Jan 30 11:44:40.904: INFO: Got endpoints: latency-svc-ckhpf [2.928167599s]
Jan 30 11:44:41.158: INFO: Created: latency-svc-n8z2l
Jan 30 11:44:41.164: INFO: Got endpoints: latency-svc-n8z2l [2.952998536s]
Jan 30 11:44:41.340: INFO: Created: latency-svc-pjkwp
Jan 30 11:44:41.354: INFO: Got endpoints: latency-svc-pjkwp [2.82691159s]
Jan 30 11:44:41.421: INFO: Created: latency-svc-ch4v4
Jan 30 11:44:41.583: INFO: Got endpoints: latency-svc-ch4v4 [2.912565264s]
Jan 30 11:44:41.583: INFO: Latencies: [154.444688ms 336.295171ms 348.790857ms 377.401255ms 419.959453ms 516.048708ms 822.710306ms 904.92511ms 1.068630534s 1.330176095s 1.569482244s 1.584360122s 1.623197478s 1.661387644s 1.740264395s 1.743963473s 1.752846956s 1.753390521s 1.789159786s 1.789541861s 1.818859326s 1.836429887s 1.876315139s 1.887852454s 1.916773499s 1.921036174s 1.921750165s 1.945894343s 1.960052483s 1.975854359s 1.984393242s 1.984475323s 2.005633175s 2.007616968s 2.008002618s 2.041970292s 2.04764165s 2.060454046s 2.079681129s 2.089137613s 2.106120223s 2.107598278s 2.123664027s 2.138043713s 2.175590881s 2.179971363s 2.196247037s 2.198894027s 2.237651936s 2.246068662s 2.250884372s 2.259996378s 2.285593823s 2.301258962s 2.302280301s 2.312820282s 2.350453534s 2.35497843s 2.371416549s 2.374721756s 2.384059673s 2.391606687s 2.392078851s 2.423113573s 2.445436423s 2.447943625s 2.45334887s 2.456488918s 2.463718179s 2.482376036s 2.495497133s 2.497058509s 2.520423253s 2.524346455s 2.530356161s 2.532400998s 2.54087536s 2.56833047s 2.568964216s 2.574264023s 2.575516371s 2.57867024s 2.579863795s 2.583648197s 2.586264348s 2.60221988s 2.616582353s 2.660082581s 2.685676699s 2.688588472s 2.690141468s 2.700330105s 2.702783049s 2.703099031s 2.706340008s 2.714303374s 2.717546692s 2.718557295s 2.725021189s 2.73093842s 2.73671602s 2.739699433s 2.749778148s 2.766847975s 2.773095747s 2.778372039s 2.779720504s 2.790712157s 2.792428401s 2.807633777s 2.809921819s 2.814486665s 2.82691159s 2.839961682s 2.859830647s 2.873539794s 2.897452702s 2.911549119s 2.912565264s 2.914374938s 2.916564145s 2.916881947s 2.928167599s 2.936418938s 2.942265147s 2.942580654s 2.947594069s 2.94865409s 2.952998536s 2.959887099s 2.962613838s 2.965961279s 2.967176695s 2.967428668s 2.978077848s 2.978243182s 2.995747523s 2.996428102s 2.999982188s 3.006542781s 3.01352178s 3.03399166s 3.05183105s 3.055206875s 3.076431335s 3.078060566s 3.093660499s 3.123653464s 3.129064753s 3.13192405s 3.147419254s 3.153099609s 3.166369692s 3.174201267s 3.190744014s 3.192757608s 3.218802752s 3.219012021s 3.241051232s 3.269136798s 3.297520287s 3.368674315s 3.408619016s 3.413009608s 3.429255171s 3.469783592s 3.502109354s 3.559413206s 3.56094253s 3.562981044s 3.566493972s 3.574608704s 3.587525039s 3.602613088s 3.629678919s 3.631721263s 3.632459367s 3.697243904s 3.721064509s 3.747383484s 3.768137129s 3.774524108s 3.787148845s 3.798751944s 3.821260935s 3.899430208s 3.957210281s 3.987952799s 3.991344768s 4.004125693s 4.056016472s 4.096083091s 4.12673442s 4.128195397s 4.142564893s 4.179576786s 4.223528454s 4.245226842s 4.270941136s 4.292651224s]
Jan 30 11:44:41.584: INFO: 50 %ile: 2.73671602s
Jan 30 11:44:41.584: INFO: 90 %ile: 3.768137129s
Jan 30 11:44:41.584: INFO: 99 %ile: 4.270941136s
Jan 30 11:44:41.584: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:44:41.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-qqhbb" for
this suite.
Jan 30 11:45:45.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:45:45.749: INFO: namespace: e2e-tests-svc-latency-qqhbb, resource: bindings, ignored listing per whitelist
Jan 30 11:45:45.806: INFO: namespace e2e-tests-svc-latency-qqhbb deletion completed in 1m4.2075802s

• [SLOW TEST:109.377 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:45:45.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0130 11:45:47.923261 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 11:45:47.923: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:45:47.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-vwsqb" for this suite.
Jan 30 11:45:54.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:45:54.338: INFO: namespace: e2e-tests-gc-vwsqb, resource: bindings, ignored listing per whitelist
Jan 30 11:45:54.418: INFO: namespace e2e-tests-gc-vwsqb deletion completed in 6.489820994s

• [SLOW TEST:8.612 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:45:54.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 30 11:46:02.702: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-12916006-4356-11ea-a47a-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-55nvt",
SelfLink:"/api/v1/namespaces/e2e-tests-pods-55nvt/pods/pod-submit-remove-12916006-4356-11ea-a47a-0242ac110005", UID:"1298d132-4356-11ea-a994-fa163e34d433", ResourceVersion:"19965823", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715981554, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"598738760"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-j9rfn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00200cfc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j9rfn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0022d7568), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0025692c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022d75a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022d75c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0022d75c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0022d75cc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715981554, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715981561, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715981561, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715981554, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001de5c60), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001de5c80), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"docker://464c6a2c949dc7a693b2d172750789806cf77800f9fb9f113753cdf6909bc08a"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:46:12.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-55nvt" for this suite.
Jan 30 11:46:18.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:46:18.802: INFO: namespace: e2e-tests-pods-55nvt, resource: bindings, ignored listing per whitelist
Jan 30 11:46:18.906: INFO: namespace e2e-tests-pods-55nvt deletion completed in 6.273975894s
• [SLOW TEST:24.486 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:46:18.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 30 11:46:19.167: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-f5gds,SelfLink:/api/v1/namespaces/e2e-tests-watch-f5gds/configmaps/e2e-watch-test-label-changed,UID:21254463-4356-11ea-a994-fa163e34d433,ResourceVersion:19965864,Generation:0,CreationTimestamp:2020-01-30 11:46:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 30 11:46:19.168: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-f5gds,SelfLink:/api/v1/namespaces/e2e-tests-watch-f5gds/configmaps/e2e-watch-test-label-changed,UID:21254463-4356-11ea-a994-fa163e34d433,ResourceVersion:19965865,Generation:0,CreationTimestamp:2020-01-30 11:46:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 30 11:46:19.168: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-f5gds,SelfLink:/api/v1/namespaces/e2e-tests-watch-f5gds/configmaps/e2e-watch-test-label-changed,UID:21254463-4356-11ea-a994-fa163e34d433,ResourceVersion:19965866,Generation:0,CreationTimestamp:2020-01-30 11:46:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 30 11:46:29.270: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-f5gds,SelfLink:/api/v1/namespaces/e2e-tests-watch-f5gds/configmaps/e2e-watch-test-label-changed,UID:21254463-4356-11ea-a994-fa163e34d433,ResourceVersion:19965880,Generation:0,CreationTimestamp:2020-01-30 11:46:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 30 11:46:29.271: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-f5gds,SelfLink:/api/v1/namespaces/e2e-tests-watch-f5gds/configmaps/e2e-watch-test-label-changed,UID:21254463-4356-11ea-a994-fa163e34d433,ResourceVersion:19965881,Generation:0,CreationTimestamp:2020-01-30 11:46:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 30 11:46:29.271: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-f5gds,SelfLink:/api/v1/namespaces/e2e-tests-watch-f5gds/configmaps/e2e-watch-test-label-changed,UID:21254463-4356-11ea-a994-fa163e34d433,ResourceVersion:19965882,Generation:0,CreationTimestamp:2020-01-30 11:46:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:46:29.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-f5gds" for this suite.
Jan 30 11:46:35.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:46:35.472: INFO: namespace: e2e-tests-watch-f5gds, resource: bindings, ignored listing per whitelist
Jan 30 11:46:35.479: INFO: namespace e2e-tests-watch-f5gds deletion completed in 6.198424589s
• [SLOW TEST:16.573 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:46:35.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 30 11:46:46.287: INFO: Successfully updated pod "pod-update-2b0eb374-4356-11ea-a47a-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan 30 11:46:46.428: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:46:46.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2b245" for this suite.
Jan 30 11:47:08.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:47:08.617: INFO: namespace: e2e-tests-pods-2b245, resource: bindings, ignored listing per whitelist
Jan 30 11:47:08.709: INFO: namespace e2e-tests-pods-2b245 deletion completed in 22.257533082s
• [SLOW TEST:33.229 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:47:08.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 30 11:47:09.004: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-cs2qx,SelfLink:/api/v1/namespaces/e2e-tests-watch-cs2qx/configmaps/e2e-watch-test-resource-version,UID:3ed8aa96-4356-11ea-a994-fa163e34d433,ResourceVersion:19965967,Generation:0,CreationTimestamp:2020-01-30 11:47:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 30 11:47:09.005: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-cs2qx,SelfLink:/api/v1/namespaces/e2e-tests-watch-cs2qx/configmaps/e2e-watch-test-resource-version,UID:3ed8aa96-4356-11ea-a994-fa163e34d433,ResourceVersion:19965968,Generation:0,CreationTimestamp:2020-01-30 11:47:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:47:09.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-cs2qx" for this suite.
Jan 30 11:47:15.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:47:15.461: INFO: namespace: e2e-tests-watch-cs2qx, resource: bindings, ignored listing per whitelist
Jan 30 11:47:16.042: INFO: namespace e2e-tests-watch-cs2qx deletion completed in 7.026568763s
• [SLOW TEST:7.332 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:47:16.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:47:27.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-nngc8" for this suite.
Jan 30 11:47:51.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:47:51.711: INFO: namespace: e2e-tests-replication-controller-nngc8, resource: bindings, ignored listing per whitelist
Jan 30 11:47:51.772: INFO: namespace e2e-tests-replication-controller-nngc8 deletion completed in 24.311994114s
• [SLOW TEST:35.730 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:47:51.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 11:47:52.177: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"588d61ed-4356-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0025ff8b2), BlockOwnerDeletion:(*bool)(0xc0025ff8b3)}}
Jan 30 11:47:52.216: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"588a7d6b-4356-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001e18192), BlockOwnerDeletion:(*bool)(0xc001e18193)}}
Jan 30 11:47:52.311: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"588befd1-4356-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001716bf2), BlockOwnerDeletion:(*bool)(0xc001716bf3)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:47:57.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-67wcm" for this suite.
Jan 30 11:48:03.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:48:03.645: INFO: namespace: e2e-tests-gc-67wcm, resource: bindings, ignored listing per whitelist
Jan 30 11:48:03.942: INFO: namespace e2e-tests-gc-67wcm deletion completed in 6.556439055s
• [SLOW TEST:12.169 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:48:03.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 30 11:48:05.345: INFO: Pod name wrapped-volume-race-6078dae1-4356-11ea-a47a-0242ac110005: Found 0 pods out of 5
Jan 30 11:48:10.377: INFO: Pod name wrapped-volume-race-6078dae1-4356-11ea-a47a-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6078dae1-4356-11ea-a47a-0242ac110005 in namespace e2e-tests-emptydir-wrapper-wcjht, will wait for the garbage collector to delete the pods
Jan 30 11:50:02.600: INFO: Deleting ReplicationController wrapped-volume-race-6078dae1-4356-11ea-a47a-0242ac110005 took: 49.961169ms
Jan 30 11:50:02.901: INFO: Terminating ReplicationController wrapped-volume-race-6078dae1-4356-11ea-a47a-0242ac110005 pods took: 300.758633ms
STEP: Creating RC which spawns configmap-volume pods
Jan 30 11:50:53.441: INFO: Pod name wrapped-volume-race-c49c08d7-4356-11ea-a47a-0242ac110005: Found 0 pods out of 5
Jan 30 11:50:58.495: INFO: Pod name wrapped-volume-race-c49c08d7-4356-11ea-a47a-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c49c08d7-4356-11ea-a47a-0242ac110005 in namespace e2e-tests-emptydir-wrapper-wcjht, will wait for the garbage collector to delete the pods
Jan 30 11:53:14.828: INFO: Deleting ReplicationController wrapped-volume-race-c49c08d7-4356-11ea-a47a-0242ac110005 took: 47.722697ms
Jan 30 11:53:15.129: INFO: Terminating ReplicationController wrapped-volume-race-c49c08d7-4356-11ea-a47a-0242ac110005 pods took: 301.042277ms
STEP: Creating RC which spawns configmap-volume pods
Jan 30 11:54:03.261: INFO: Pod name wrapped-volume-race-35c16872-4357-11ea-a47a-0242ac110005: Found 0 pods out of 5
Jan 30 11:54:08.337: INFO: Pod name wrapped-volume-race-35c16872-4357-11ea-a47a-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-35c16872-4357-11ea-a47a-0242ac110005 in namespace e2e-tests-emptydir-wrapper-wcjht, will wait for the garbage collector to delete the pods
Jan 30 11:56:22.535: INFO: Deleting ReplicationController wrapped-volume-race-35c16872-4357-11ea-a47a-0242ac110005 took: 52.542797ms
Jan 30 11:56:22.936: INFO: Terminating ReplicationController wrapped-volume-race-35c16872-4357-11ea-a47a-0242ac110005 pods took: 400.977881ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 11:57:15.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-wcjht" for this suite.
Jan 30 11:57:25.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 11:57:25.307: INFO: namespace: e2e-tests-emptydir-wrapper-wcjht, resource: bindings, ignored listing per whitelist
Jan 30 11:57:25.485: INFO: namespace e2e-tests-emptydir-wrapper-wcjht deletion completed in 10.243335405s
• [SLOW TEST:561.540 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 11:57:25.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 11:57:25.860: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 30 11:57:26.225: INFO: Number of nodes with available pods: 0
Jan 30 11:57:26.225: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:57:27.241: INFO: Number of nodes with available pods: 0
Jan 30 11:57:27.241: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:57:29.525: INFO: Number of nodes with available pods: 0
Jan 30 11:57:29.525: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:57:30.578: INFO: Number of nodes with available pods: 0
Jan 30 11:57:30.578: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:57:31.262: INFO: Number of nodes with available pods: 0
Jan 30 11:57:31.262: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:57:32.293: INFO: Number of nodes with available pods: 0
Jan 30 11:57:32.293: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:57:33.248: INFO: Number of nodes with available pods: 0
Jan 30 11:57:33.248: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:57:34.637: INFO: Number of nodes with available pods: 0
Jan 30 11:57:34.637: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:57:35.246: INFO: Number of nodes with available pods: 0
Jan 30 11:57:35.246: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:57:36.256: INFO: Number of nodes with available pods: 0
Jan 30 11:57:36.256: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 11:57:37.297: INFO: Number of nodes with available pods: 1
Jan 30 11:57:37.297: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 30 11:57:37.475: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:38.517: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:39.510: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:40.523: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:41.496: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:42.516: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:43.503: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:43.503: INFO: Pod daemon-set-68zz4 is not available
Jan 30 11:57:44.549: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:44.550: INFO: Pod daemon-set-68zz4 is not available
Jan 30 11:57:45.507: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:45.507: INFO: Pod daemon-set-68zz4 is not available
Jan 30 11:57:46.523: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:46.523: INFO: Pod daemon-set-68zz4 is not available
Jan 30 11:57:47.503: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:47.503: INFO: Pod daemon-set-68zz4 is not available
Jan 30 11:57:48.512: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:48.512: INFO: Pod daemon-set-68zz4 is not available
Jan 30 11:57:49.503: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:49.503: INFO: Pod daemon-set-68zz4 is not available
Jan 30 11:57:50.510: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:50.510: INFO: Pod daemon-set-68zz4 is not available
Jan 30 11:57:51.503: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:51.503: INFO: Pod daemon-set-68zz4 is not available
Jan 30 11:57:52.513: INFO: Wrong image for pod: daemon-set-68zz4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 11:57:52.513: INFO: Pod daemon-set-68zz4 is not available Jan 30 11:57:53.514: INFO: Pod daemon-set-xnlj5 is not available STEP: Check that daemon pods are still running on every node of the cluster. Jan 30 11:57:53.533: INFO: Number of nodes with available pods: 0 Jan 30 11:57:53.533: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 30 11:57:54.843: INFO: Number of nodes with available pods: 0 Jan 30 11:57:54.844: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 30 11:57:55.562: INFO: Number of nodes with available pods: 0 Jan 30 11:57:55.562: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 30 11:57:56.576: INFO: Number of nodes with available pods: 0 Jan 30 11:57:56.576: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 30 11:57:57.558: INFO: Number of nodes with available pods: 0 Jan 30 11:57:57.558: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 30 11:57:58.876: INFO: Number of nodes with available pods: 0 Jan 30 11:57:58.877: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 30 11:57:59.551: INFO: Number of nodes with available pods: 0 Jan 30 11:57:59.551: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 30 11:58:00.624: INFO: Number of nodes with available pods: 0 Jan 30 11:58:00.624: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 30 11:58:01.568: INFO: Number of nodes with available pods: 0 Jan 30 11:58:01.568: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 30 11:58:02.604: INFO: Number of nodes with available pods: 1 Jan 30 11:58:02.604: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting 
DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mj2cw, will wait for the garbage collector to delete the pods Jan 30 11:58:02.718: INFO: Deleting DaemonSet.extensions daemon-set took: 14.913255ms Jan 30 11:58:02.819: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.544689ms Jan 30 11:58:12.673: INFO: Number of nodes with available pods: 0 Jan 30 11:58:12.673: INFO: Number of running nodes: 0, number of available pods: 0 Jan 30 11:58:12.679: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mj2cw/daemonsets","resourceVersion":"19967307"},"items":null} Jan 30 11:58:12.682: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mj2cw/pods","resourceVersion":"19967307"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:58:12.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-mj2cw" for this suite. 
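The image rollout traced above is the standard DaemonSet RollingUpdate flow: the controller deletes the old pod (running docker.io/library/nginx:1.14-alpine), waits for its replacement (gcr.io/kubernetes-e2e-test-images/redis:1.0) to become available, then re-verifies that every node runs exactly one daemon pod. A minimal manifest exercising the same path might look like the following — a reconstruction from the log, not the test's actual spec; the labels and container name are assumptions, only the images and update strategy come from the output above:

```yaml
# Hypothetical reconstruction of the DaemonSet under test.
# Only the images and the RollingUpdate strategy are taken from the log;
# labels and the container name are assumed for illustration.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate        # old pod is deleted before the new one is created
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        # Patching this field to gcr.io/kubernetes-e2e-test-images/redis:1.0
        # triggers the "Wrong image for pod" polling seen in the log above.
        image: docker.io/library/nginx:1.14-alpine
```

Updating `.spec.template.spec.containers[0].image` and watching pod availability per node would reproduce the transition the log polls for.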
Jan 30 11:58:18.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:58:18.841: INFO: namespace: e2e-tests-daemonsets-mj2cw, resource: bindings, ignored listing per whitelist Jan 30 11:58:18.999: INFO: namespace e2e-tests-daemonsets-mj2cw deletion completed in 6.303583114s • [SLOW TEST:53.513 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:58:18.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-l2w2b STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-l2w2b to expose endpoints map[] Jan 30 11:58:19.417: INFO: Get endpoints failed (77.015256ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 30 11:58:20.431: INFO: successfully validated that service multi-endpoint-test 
in namespace e2e-tests-services-l2w2b exposes endpoints map[] (1.090912619s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-l2w2b STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-l2w2b to expose endpoints map[pod1:[100]] Jan 30 11:58:24.701: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.250226569s elapsed, will retry) Jan 30 11:58:28.805: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-l2w2b exposes endpoints map[pod1:[100]] (8.354207476s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-l2w2b STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-l2w2b to expose endpoints map[pod1:[100] pod2:[101]] Jan 30 11:58:34.074: INFO: Unexpected endpoints: found map[cf1fbf6f-4357-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.248612459s elapsed, will retry) Jan 30 11:58:37.249: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-l2w2b exposes endpoints map[pod1:[100] pod2:[101]] (8.423573965s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-l2w2b STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-l2w2b to expose endpoints map[pod2:[101]] Jan 30 11:58:38.328: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-l2w2b exposes endpoints map[pod2:[101]] (1.066876406s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-l2w2b STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-l2w2b to expose endpoints map[] Jan 30 11:58:39.509: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-l2w2b exposes endpoints map[] (1.15949818s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 
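The endpoint maps logged above (map[pod1:[100] pod2:[101]]) indicate a Service whose endpoints resolve to container ports 100 and 101 on two separate backing pods, added and removed one at a time. A sketch of such a multiport Service follows; the selector, port names, and front-end ports are assumptions, while the Service name and target ports 100/101 come from the log:

```yaml
# Hypothetical sketch of the multi-endpoint-test Service.
# Target ports 100/101 match the endpoint maps in the log; the selector
# and front-end ports are assumed for illustration.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test
  ports:
  - name: portname1
    port: 80
    targetPort: 100   # served by pod1 in the log
  - name: portname2
    port: 81
    targetPort: 101   # served by pod2 in the log
```

As pods matching the selector come and go, the endpoints controller updates the Service's Endpoints object, which is exactly what the `exposes endpoints map[...]` polling above validates.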
Jan 30 11:58:39.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-l2w2b" for this suite. Jan 30 11:59:03.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:59:03.929: INFO: namespace: e2e-tests-services-l2w2b, resource: bindings, ignored listing per whitelist Jan 30 11:59:04.050: INFO: namespace e2e-tests-services-l2w2b deletion completed in 24.335882582s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:45.051 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:59:04.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e93a29fc-4357-11ea-a47a-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 30 11:59:04.317: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-e9444e37-4357-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-85spg" to be "success or failure" Jan 30 11:59:04.338: INFO: Pod "pod-projected-configmaps-e9444e37-4357-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.070761ms Jan 30 11:59:06.371: INFO: Pod "pod-projected-configmaps-e9444e37-4357-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053487585s Jan 30 11:59:08.391: INFO: Pod "pod-projected-configmaps-e9444e37-4357-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074035746s Jan 30 11:59:10.404: INFO: Pod "pod-projected-configmaps-e9444e37-4357-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086356224s Jan 30 11:59:12.421: INFO: Pod "pod-projected-configmaps-e9444e37-4357-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.103802088s STEP: Saw pod success Jan 30 11:59:12.421: INFO: Pod "pod-projected-configmaps-e9444e37-4357-11ea-a47a-0242ac110005" satisfied condition "success or failure" Jan 30 11:59:12.430: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e9444e37-4357-11ea-a47a-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 30 11:59:12.562: INFO: Waiting for pod pod-projected-configmaps-e9444e37-4357-11ea-a47a-0242ac110005 to disappear Jan 30 11:59:12.706: INFO: Pod pod-projected-configmaps-e9444e37-4357-11ea-a47a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:59:12.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-85spg" for this suite. 
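The Pending-to-Succeeded polling above is the framework waiting for a short-lived test pod to run to completion ("success or failure"). The pod mounts a ConfigMap through a projected volume and reads it as a non-root user; an equivalent manifest might look like this — a sketch, in which the ConfigMap name comes from the log while the UID, image, key, and mount path are assumptions:

```yaml
# Hypothetical equivalent of the projected-ConfigMap test pod.
# The ConfigMap name is taken from the log; runAsUser, image, and
# paths are assumed for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  securityContext:
    runAsUser: 1000            # the "as non-root" part of the test
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-e93a29fc-4357-11ea-a47a-0242ac110005
```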
Jan 30 11:59:18.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:59:18.955: INFO: namespace: e2e-tests-projected-85spg, resource: bindings, ignored listing per whitelist Jan 30 11:59:19.008: INFO: namespace e2e-tests-projected-85spg deletion completed in 6.280323995s • [SLOW TEST:14.956 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:59:19.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f2294a7a-4357-11ea-a47a-0242ac110005 STEP: Creating a pod to test consume secrets Jan 30 11:59:19.257: INFO: Waiting up to 5m0s for pod "pod-secrets-f22bcf8b-4357-11ea-a47a-0242ac110005" in namespace "e2e-tests-secrets-glmb6" to be "success or failure" Jan 30 11:59:19.376: INFO: Pod "pod-secrets-f22bcf8b-4357-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 119.447088ms Jan 30 11:59:21.472: INFO: Pod "pod-secrets-f22bcf8b-4357-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215263429s Jan 30 11:59:23.507: INFO: Pod "pod-secrets-f22bcf8b-4357-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250473136s Jan 30 11:59:25.754: INFO: Pod "pod-secrets-f22bcf8b-4357-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.497367482s Jan 30 11:59:27.781: INFO: Pod "pod-secrets-f22bcf8b-4357-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.524586978s Jan 30 11:59:29.795: INFO: Pod "pod-secrets-f22bcf8b-4357-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.53839461s STEP: Saw pod success Jan 30 11:59:29.795: INFO: Pod "pod-secrets-f22bcf8b-4357-11ea-a47a-0242ac110005" satisfied condition "success or failure" Jan 30 11:59:29.801: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f22bcf8b-4357-11ea-a47a-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 30 11:59:30.672: INFO: Waiting for pod pod-secrets-f22bcf8b-4357-11ea-a47a-0242ac110005 to disappear Jan 30 11:59:30.817: INFO: Pod pod-secrets-f22bcf8b-4357-11ea-a47a-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:59:30.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-glmb6" for this suite. 
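The Secret-volume test follows the same pattern: a one-shot pod mounts the Secret as a volume and the framework waits for it to reach Succeeded. A pod equivalent to what the test creates might look like the following sketch; the Secret name comes from the log, while the image, key, and mount path are assumptions:

```yaml
# Hypothetical equivalent of the Secret-volume test pod.
# The Secret name is taken from the log; image, key, and mount path
# are assumed for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-f2294a7a-4357-11ea-a47a-0242ac110005
```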
Jan 30 11:59:36.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 11:59:37.059: INFO: namespace: e2e-tests-secrets-glmb6, resource: bindings, ignored listing per whitelist Jan 30 11:59:37.085: INFO: namespace e2e-tests-secrets-glmb6 deletion completed in 6.253332894s • [SLOW TEST:18.076 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 11:59:37.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 30 11:59:48.427: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 11:59:49.546: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-xmd46" for this suite. Jan 30 12:00:14.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 12:00:14.703: INFO: namespace: e2e-tests-replicaset-xmd46, resource: bindings, ignored listing per whitelist Jan 30 12:00:14.778: INFO: namespace e2e-tests-replicaset-xmd46 deletion completed in 25.22289008s • [SLOW TEST:37.693 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 12:00:14.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 30 12:00:14.982: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 30 12:00:15.143: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 30 12:00:20.679: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 30 
12:00:24.705: INFO: Creating deployment "test-rolling-update-deployment" Jan 30 12:00:24.722: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 30 12:00:24.740: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 30 12:00:26.759: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 30 12:00:26.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982424, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982424, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982425, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982424, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 12:00:28.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982424, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982424, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982425, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982424, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 12:00:30.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982424, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982424, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982425, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982424, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 12:00:32.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982424, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982424, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982432, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715982424, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 12:00:34.779: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 30 12:00:34.819: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-lk6tx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lk6tx/deployments/test-rolling-update-deployment,UID:1931cc41-4358-11ea-a994-fa163e34d433,ResourceVersion:19967687,Generation:1,CreationTimestamp:2020-01-30 12:00:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] 
map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-30 12:00:24 +0000 UTC 2020-01-30 12:00:24 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-30 12:00:32 +0000 UTC 2020-01-30 12:00:24 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 30 12:00:34.837: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-lk6tx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lk6tx/replicasets/test-rolling-update-deployment-75db98fb4c,UID:19394ae1-4358-11ea-a994-fa163e34d433,ResourceVersion:19967677,Generation:1,CreationTimestamp:2020-01-30 12:00:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 1931cc41-4358-11ea-a994-fa163e34d433 0xc000b50b37 0xc000b50b38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 30 12:00:34.837: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 30 12:00:34.837: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-lk6tx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lk6tx/replicasets/test-rolling-update-controller,UID:13664674-4358-11ea-a994-fa163e34d433,ResourceVersion:19967685,Generation:2,CreationTimestamp:2020-01-30 12:00:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 1931cc41-4358-11ea-a994-fa163e34d433 0xc000b50a5f 0xc000b50a70}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 30 12:00:34.851: INFO: Pod "test-rolling-update-deployment-75db98fb4c-v2b72" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-v2b72,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-lk6tx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lk6tx/pods/test-rolling-update-deployment-75db98fb4c-v2b72,UID:1947f4ce-4358-11ea-a994-fa163e34d433,ResourceVersion:19967676,Generation:0,CreationTimestamp:2020-01-30 12:00:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 19394ae1-4358-11ea-a994-fa163e34d433 0xc000b51417 0xc000b51418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q55vq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q55vq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-q55vq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b51480} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b514a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:00:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:00:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:00:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:00:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-30 12:00:25 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-30 12:00:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://1d3b796d107862e5897e8ce21d072c54eb183ef86d4ee924caea450e7a025eb8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 12:00:34.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-deployment-lk6tx" for this suite. Jan 30 12:00:43.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 12:00:43.661: INFO: namespace: e2e-tests-deployment-lk6tx, resource: bindings, ignored listing per whitelist Jan 30 12:00:43.746: INFO: namespace e2e-tests-deployment-lk6tx deletion completed in 8.874715173s • [SLOW TEST:28.968 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 12:00:43.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-hhs5 STEP: Creating a pod to test atomic-volume-subpath Jan 30 12:00:44.115: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hhs5" in namespace "e2e-tests-subpath-cs9bf" to be "success or failure" Jan 30 12:00:44.136: INFO: Pod 
"pod-subpath-test-configmap-hhs5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.559243ms Jan 30 12:00:46.161: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045445725s Jan 30 12:00:48.184: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068743074s Jan 30 12:00:50.197: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081886054s Jan 30 12:00:52.211: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095306378s Jan 30 12:00:54.236: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120516932s Jan 30 12:00:56.323: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.207917559s Jan 30 12:00:58.341: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Running", Reason="", readiness=false. Elapsed: 14.225488728s Jan 30 12:01:00.361: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Running", Reason="", readiness=false. Elapsed: 16.246126827s Jan 30 12:01:02.397: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Running", Reason="", readiness=false. Elapsed: 18.281340216s Jan 30 12:01:04.412: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Running", Reason="", readiness=false. Elapsed: 20.296940766s Jan 30 12:01:06.431: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Running", Reason="", readiness=false. Elapsed: 22.316098326s Jan 30 12:01:08.461: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Running", Reason="", readiness=false. Elapsed: 24.345966361s Jan 30 12:01:10.490: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Running", Reason="", readiness=false. Elapsed: 26.374734108s Jan 30 12:01:12.546: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.430593527s Jan 30 12:01:14.573: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Running", Reason="", readiness=false. Elapsed: 30.45804983s Jan 30 12:01:16.605: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Running", Reason="", readiness=false. Elapsed: 32.489984546s Jan 30 12:01:18.673: INFO: Pod "pod-subpath-test-configmap-hhs5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.557475528s STEP: Saw pod success Jan 30 12:01:18.673: INFO: Pod "pod-subpath-test-configmap-hhs5" satisfied condition "success or failure" Jan 30 12:01:18.683: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-hhs5 container test-container-subpath-configmap-hhs5: STEP: delete the pod Jan 30 12:01:18.759: INFO: Waiting for pod pod-subpath-test-configmap-hhs5 to disappear Jan 30 12:01:18.849: INFO: Pod pod-subpath-test-configmap-hhs5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-hhs5 Jan 30 12:01:18.850: INFO: Deleting pod "pod-subpath-test-configmap-hhs5" in namespace "e2e-tests-subpath-cs9bf" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 30 12:01:18.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-cs9bf" for this suite. 
Jan 30 12:01:26.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 12:01:27.143: INFO: namespace: e2e-tests-subpath-cs9bf, resource: bindings, ignored listing per whitelist Jan 30 12:01:27.154: INFO: namespace e2e-tests-subpath-cs9bf deletion completed in 8.281618623s • [SLOW TEST:43.408 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 30 12:01:27.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 30 12:01:27.500: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 101.7744ms)
Jan 30 12:01:27.605: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 105.246131ms)
Jan 30 12:01:27.619: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.840115ms)
Jan 30 12:01:27.627: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.011606ms)
Jan 30 12:01:27.644: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.703274ms)
Jan 30 12:01:27.653: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.989395ms)
Jan 30 12:01:27.661: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.044847ms)
Jan 30 12:01:27.671: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.429547ms)
Jan 30 12:01:27.682: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.782586ms)
Jan 30 12:01:27.690: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.359656ms)
Jan 30 12:01:27.698: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.847645ms)
Jan 30 12:01:27.707: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.861429ms)
Jan 30 12:01:27.713: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.524585ms)
Jan 30 12:01:27.723: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.224385ms)
Jan 30 12:01:27.740: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.299252ms)
Jan 30 12:01:27.749: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.928507ms)
Jan 30 12:01:27.754: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.523941ms)
Jan 30 12:01:27.761: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.959287ms)
Jan 30 12:01:27.767: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.484384ms)
Jan 30 12:01:27.773: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.573306ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:01:27.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-ns8xq" for this suite.
Jan 30 12:01:33.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:01:34.043: INFO: namespace: e2e-tests-proxy-ns8xq, resource: bindings, ignored listing per whitelist
Jan 30 12:01:34.141: INFO: namespace e2e-tests-proxy-ns8xq deletion completed in 6.3623307s

• [SLOW TEST:6.986 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:01:34.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 12:01:34.376: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 30 12:01:39.695: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 30 12:01:44.231: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 30 12:01:44.275: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-kln8f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kln8f/deployments/test-cleanup-deployment,UID:489a4517-4358-11ea-a994-fa163e34d433,ResourceVersion:19967867,Generation:1,CreationTimestamp:2020-01-30 12:01:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 30 12:01:44.278: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:01:44.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-kln8f" for this suite.
Jan 30 12:01:53.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:01:53.559: INFO: namespace: e2e-tests-deployment-kln8f, resource: bindings, ignored listing per whitelist
Jan 30 12:01:54.072: INFO: namespace e2e-tests-deployment-kln8f deletion completed in 9.524763903s

• [SLOW TEST:19.931 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:01:54.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan 30 12:01:54.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bw9mj'
Jan 30 12:01:56.828: INFO: stderr: ""
Jan 30 12:01:56.828: INFO: stdout: "pod/pause created\n"
Jan 30 12:01:56.828: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 30 12:01:56.829: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-bw9mj" to be "running and ready"
Jan 30 12:01:56.908: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 78.610114ms
Jan 30 12:01:58.924: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095326786s
Jan 30 12:02:00.971: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142103443s
Jan 30 12:02:02.994: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165481163s
Jan 30 12:02:05.020: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.190638795s
Jan 30 12:02:05.020: INFO: Pod "pause" satisfied condition "running and ready"
Jan 30 12:02:05.020: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 30 12:02:05.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-bw9mj'
Jan 30 12:02:05.256: INFO: stderr: ""
Jan 30 12:02:05.256: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 30 12:02:05.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-bw9mj'
Jan 30 12:02:05.388: INFO: stderr: ""
Jan 30 12:02:05.388: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 30 12:02:05.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-bw9mj'
Jan 30 12:02:05.522: INFO: stderr: ""
Jan 30 12:02:05.523: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 30 12:02:05.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-bw9mj'
Jan 30 12:02:05.660: INFO: stderr: ""
Jan 30 12:02:05.660: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan 30 12:02:05.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bw9mj'
Jan 30 12:02:05.811: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 12:02:05.811: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 30 12:02:05.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-bw9mj'
Jan 30 12:02:05.978: INFO: stderr: "No resources found.\n"
Jan 30 12:02:05.979: INFO: stdout: ""
Jan 30 12:02:05.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-bw9mj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 30 12:02:06.152: INFO: stderr: ""
Jan 30 12:02:06.152: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:02:06.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bw9mj" for this suite.
Jan 30 12:02:12.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:02:12.273: INFO: namespace: e2e-tests-kubectl-bw9mj, resource: bindings, ignored listing per whitelist
Jan 30 12:02:12.327: INFO: namespace e2e-tests-kubectl-bw9mj deletion completed in 6.166539373s

• [SLOW TEST:18.255 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:02:12.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:02:20.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-55lwt" for this suite.
Jan 30 12:02:26.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:02:27.138: INFO: namespace: e2e-tests-kubelet-test-55lwt, resource: bindings, ignored listing per whitelist
Jan 30 12:02:27.138: INFO: namespace e2e-tests-kubelet-test-55lwt deletion completed in 6.353243867s

• [SLOW TEST:14.811 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:02:27.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-fzfqb
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-fzfqb
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-fzfqb
Jan 30 12:02:27.466: INFO: Found 0 stateful pods, waiting for 1
Jan 30 12:02:37.497: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 30 12:02:37.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fzfqb ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 12:02:38.334: INFO: stderr: "I0130 12:02:37.752706    1187 log.go:172] (0xc00015c840) (0xc000625220) Create stream\nI0130 12:02:37.753182    1187 log.go:172] (0xc00015c840) (0xc000625220) Stream added, broadcasting: 1\nI0130 12:02:37.759488    1187 log.go:172] (0xc00015c840) Reply frame received for 1\nI0130 12:02:37.759543    1187 log.go:172] (0xc00015c840) (0xc0007ae000) Create stream\nI0130 12:02:37.759553    1187 log.go:172] (0xc00015c840) (0xc0007ae000) Stream added, broadcasting: 3\nI0130 12:02:37.760691    1187 log.go:172] (0xc00015c840) Reply frame received for 3\nI0130 12:02:37.760722    1187 log.go:172] (0xc00015c840) (0xc0003ca000) Create stream\nI0130 12:02:37.760741    1187 log.go:172] (0xc00015c840) (0xc0003ca000) Stream added, broadcasting: 5\nI0130 12:02:37.764303    1187 log.go:172] (0xc00015c840) Reply frame received for 5\nI0130 12:02:38.201322    1187 log.go:172] (0xc00015c840) Data frame received for 3\nI0130 12:02:38.201423    1187 log.go:172] (0xc0007ae000) (3) Data frame handling\nI0130 12:02:38.201446    1187 log.go:172] (0xc0007ae000) (3) Data frame sent\nI0130 12:02:38.320078    1187 log.go:172] (0xc00015c840) Data frame received for 1\nI0130 12:02:38.320279    1187 log.go:172] (0xc00015c840) (0xc0007ae000) Stream removed, broadcasting: 3\nI0130 12:02:38.320422    1187 log.go:172] (0xc000625220) (1) Data frame handling\nI0130 12:02:38.320454    1187 log.go:172] (0xc000625220) (1) Data frame sent\nI0130 12:02:38.320460    1187 log.go:172] (0xc00015c840) (0xc000625220) Stream removed, broadcasting: 1\nI0130 12:02:38.321480    1187 log.go:172] (0xc00015c840) (0xc0003ca000) Stream removed, broadcasting: 5\nI0130 12:02:38.321524    1187 log.go:172] (0xc00015c840) Go away received\nI0130 12:02:38.321692    1187 log.go:172] (0xc00015c840) (0xc000625220) Stream removed, broadcasting: 1\nI0130 12:02:38.321713    1187 log.go:172] (0xc00015c840) (0xc0007ae000) Stream removed, broadcasting: 3\nI0130 12:02:38.321722    1187 log.go:172] 
(0xc00015c840) (0xc0003ca000) Stream removed, broadcasting: 5\n"
Jan 30 12:02:38.335: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 12:02:38.335: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 30 12:02:38.350: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 30 12:02:48.368: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 12:02:48.368: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 12:02:48.424: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 30 12:02:48.424: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  }]
Jan 30 12:02:48.424: INFO: 
Jan 30 12:02:48.424: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 30 12:02:49.457: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982912503s
Jan 30 12:02:50.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.950166927s
Jan 30 12:02:51.765: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.676211832s
Jan 30 12:02:52.896: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.641777563s
Jan 30 12:02:53.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.510608177s
Jan 30 12:02:54.966: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.474859051s
Jan 30 12:02:56.606: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.440843507s
Jan 30 12:02:58.178: INFO: Verifying statefulset ss doesn't scale past 3 for another 801.028222ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-fzfqb
Jan 30 12:02:59.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fzfqb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'

Jan 30 12:03:00.172: INFO: stderr: "I0130 12:02:59.618256    1209 log.go:172] (0xc000884210) (0xc00087e5a0) Create stream\nI0130 12:02:59.619168    1209 log.go:172] (0xc000884210) (0xc00087e5a0) Stream added, broadcasting: 1\nI0130 12:02:59.634210    1209 log.go:172] (0xc000884210) Reply frame received for 1\nI0130 12:02:59.634303    1209 log.go:172] (0xc000884210) (0xc0006e8000) Create stream\nI0130 12:02:59.634315    1209 log.go:172] (0xc000884210) (0xc0006e8000) Stream added, broadcasting: 3\nI0130 12:02:59.636114    1209 log.go:172] (0xc000884210) Reply frame received for 3\nI0130 12:02:59.636144    1209 log.go:172] (0xc000884210) (0xc0002a4dc0) Create stream\nI0130 12:02:59.636154    1209 log.go:172] (0xc000884210) (0xc0002a4dc0) Stream added, broadcasting: 5\nI0130 12:02:59.637975    1209 log.go:172] (0xc000884210) Reply frame received for 5\nI0130 12:02:59.960180    1209 log.go:172] (0xc000884210) Data frame received for 3\nI0130 12:02:59.960323    1209 log.go:172] (0xc0006e8000) (3) Data frame handling\nI0130 12:02:59.960360    1209 log.go:172] (0xc0006e8000) (3) Data frame sent\nI0130 12:03:00.163953    1209 log.go:172] (0xc000884210) (0xc0002a4dc0) Stream removed, broadcasting: 5\nI0130 12:03:00.164109    1209 log.go:172] (0xc000884210) Data frame received for 1\nI0130 12:03:00.164202    1209 log.go:172] (0xc000884210) (0xc0006e8000) Stream removed, broadcasting: 3\nI0130 12:03:00.164266    1209 log.go:172] (0xc00087e5a0) (1) Data frame handling\nI0130 12:03:00.164313    1209 log.go:172] (0xc00087e5a0) (1) Data frame sent\nI0130 12:03:00.164329    1209 log.go:172] (0xc000884210) (0xc00087e5a0) Stream removed, broadcasting: 1\nI0130 12:03:00.164348    1209 log.go:172] (0xc000884210) Go away received\nI0130 12:03:00.165051    1209 log.go:172] (0xc000884210) (0xc00087e5a0) Stream removed, broadcasting: 1\nI0130 12:03:00.165071    1209 log.go:172] (0xc000884210) (0xc0006e8000) Stream removed, broadcasting: 3\nI0130 12:03:00.165081    1209 log.go:172] 
(0xc000884210) (0xc0002a4dc0) Stream removed, broadcasting: 5\n"
Jan 30 12:03:00.172: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 12:03:00.172: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 12:03:00.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fzfqb ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 12:03:00.721: INFO: stderr: "I0130 12:03:00.336579    1230 log.go:172] (0xc000138840) (0xc00075e5a0) Create stream\nI0130 12:03:00.336802    1230 log.go:172] (0xc000138840) (0xc00075e5a0) Stream added, broadcasting: 1\nI0130 12:03:00.344810    1230 log.go:172] (0xc000138840) Reply frame received for 1\nI0130 12:03:00.344876    1230 log.go:172] (0xc000138840) (0xc000664c80) Create stream\nI0130 12:03:00.344887    1230 log.go:172] (0xc000138840) (0xc000664c80) Stream added, broadcasting: 3\nI0130 12:03:00.347285    1230 log.go:172] (0xc000138840) Reply frame received for 3\nI0130 12:03:00.347353    1230 log.go:172] (0xc000138840) (0xc000394000) Create stream\nI0130 12:03:00.347364    1230 log.go:172] (0xc000138840) (0xc000394000) Stream added, broadcasting: 5\nI0130 12:03:00.348464    1230 log.go:172] (0xc000138840) Reply frame received for 5\nI0130 12:03:00.479785    1230 log.go:172] (0xc000138840) Data frame received for 3\nI0130 12:03:00.479869    1230 log.go:172] (0xc000664c80) (3) Data frame handling\nI0130 12:03:00.479889    1230 log.go:172] (0xc000664c80) (3) Data frame sent\nI0130 12:03:00.479899    1230 log.go:172] (0xc000138840) Data frame received for 5\nI0130 12:03:00.479903    1230 log.go:172] (0xc000394000) (5) Data frame handling\nI0130 12:03:00.479911    1230 log.go:172] (0xc000394000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0130 12:03:00.700418    1230 log.go:172] (0xc000138840) Data frame received for 1\nI0130 12:03:00.700508    1230 log.go:172] (0xc00075e5a0) (1) Data frame handling\nI0130 12:03:00.700559    1230 log.go:172] (0xc00075e5a0) (1) Data frame sent\nI0130 12:03:00.703278    1230 log.go:172] (0xc000138840) (0xc00075e5a0) Stream removed, broadcasting: 1\nI0130 12:03:00.708758    1230 log.go:172] (0xc000138840) (0xc000664c80) Stream removed, broadcasting: 3\nI0130 12:03:00.712779    1230 log.go:172] (0xc000138840) (0xc000394000) Stream removed, broadcasting: 5\nI0130 12:03:00.712857    
1230 log.go:172] (0xc000138840) (0xc00075e5a0) Stream removed, broadcasting: 1\nI0130 12:03:00.712876    1230 log.go:172] (0xc000138840) (0xc000664c80) Stream removed, broadcasting: 3\nI0130 12:03:00.712886    1230 log.go:172] (0xc000138840) (0xc000394000) Stream removed, broadcasting: 5\n"
Jan 30 12:03:00.722: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 12:03:00.722: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 12:03:00.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fzfqb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 12:03:01.187: INFO: stderr: "I0130 12:03:00.898398    1252 log.go:172] (0xc00072c2c0) (0xc0000232c0) Create stream\nI0130 12:03:00.898539    1252 log.go:172] (0xc00072c2c0) (0xc0000232c0) Stream added, broadcasting: 1\nI0130 12:03:00.902572    1252 log.go:172] (0xc00072c2c0) Reply frame received for 1\nI0130 12:03:00.902600    1252 log.go:172] (0xc00072c2c0) (0xc000768000) Create stream\nI0130 12:03:00.902631    1252 log.go:172] (0xc00072c2c0) (0xc000768000) Stream added, broadcasting: 3\nI0130 12:03:00.903444    1252 log.go:172] (0xc00072c2c0) Reply frame received for 3\nI0130 12:03:00.903481    1252 log.go:172] (0xc00072c2c0) (0xc000214000) Create stream\nI0130 12:03:00.903510    1252 log.go:172] (0xc00072c2c0) (0xc000214000) Stream added, broadcasting: 5\nI0130 12:03:00.904264    1252 log.go:172] (0xc00072c2c0) Reply frame received for 5\nI0130 12:03:01.015699    1252 log.go:172] (0xc00072c2c0) Data frame received for 3\nI0130 12:03:01.015771    1252 log.go:172] (0xc000768000) (3) Data frame handling\nI0130 12:03:01.015789    1252 log.go:172] (0xc000768000) (3) Data frame sent\nI0130 12:03:01.015833    1252 log.go:172] (0xc00072c2c0) Data frame received for 5\nI0130 12:03:01.015854    1252 log.go:172] (0xc000214000) (5) Data frame handling\nI0130 12:03:01.015872    1252 log.go:172] (0xc000214000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0130 12:03:01.172623    1252 log.go:172] (0xc00072c2c0) Data frame received for 1\nI0130 12:03:01.172717    1252 log.go:172] (0xc0000232c0) (1) Data frame handling\nI0130 12:03:01.172748    1252 log.go:172] (0xc0000232c0) (1) Data frame sent\nI0130 12:03:01.173492    1252 log.go:172] (0xc00072c2c0) (0xc000768000) Stream removed, broadcasting: 3\nI0130 12:03:01.173812    1252 log.go:172] (0xc00072c2c0) (0xc0000232c0) Stream removed, broadcasting: 1\nI0130 12:03:01.173922    1252 log.go:172] (0xc00072c2c0) (0xc000214000) Stream removed, broadcasting: 5\nI0130 12:03:01.174737    
1252 log.go:172] (0xc00072c2c0) (0xc0000232c0) Stream removed, broadcasting: 1\nI0130 12:03:01.174788    1252 log.go:172] (0xc00072c2c0) (0xc000768000) Stream removed, broadcasting: 3\nI0130 12:03:01.174808    1252 log.go:172] (0xc00072c2c0) (0xc000214000) Stream removed, broadcasting: 5\nI0130 12:03:01.175005    1252 log.go:172] (0xc00072c2c0) Go away received\n"
Jan 30 12:03:01.187: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 12:03:01.187: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 12:03:01.205: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 12:03:01.205: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false
Jan 30 12:03:11.231: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 12:03:11.231: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 12:03:11.231: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 30 12:03:11.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fzfqb ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 12:03:11.866: INFO: stderr: "I0130 12:03:11.490436    1274 log.go:172] (0xc00071a370) (0xc00073c640) Create stream\nI0130 12:03:11.490780    1274 log.go:172] (0xc00071a370) (0xc00073c640) Stream added, broadcasting: 1\nI0130 12:03:11.498886    1274 log.go:172] (0xc00071a370) Reply frame received for 1\nI0130 12:03:11.498976    1274 log.go:172] (0xc00071a370) (0xc000650c80) Create stream\nI0130 12:03:11.498998    1274 log.go:172] (0xc00071a370) (0xc000650c80) Stream added, broadcasting: 3\nI0130 12:03:11.500805    1274 log.go:172] (0xc00071a370) Reply frame received for 3\nI0130 12:03:11.500852    1274 log.go:172] (0xc00071a370) (0xc000790000) Create stream\nI0130 12:03:11.500869    1274 log.go:172] (0xc00071a370) (0xc000790000) Stream added, broadcasting: 5\nI0130 12:03:11.502812    1274 log.go:172] (0xc00071a370) Reply frame received for 5\nI0130 12:03:11.665517    1274 log.go:172] (0xc00071a370) Data frame received for 3\nI0130 12:03:11.665651    1274 log.go:172] (0xc000650c80) (3) Data frame handling\nI0130 12:03:11.665675    1274 log.go:172] (0xc000650c80) (3) Data frame sent\nI0130 12:03:11.838662    1274 log.go:172] (0xc00071a370) Data frame received for 1\nI0130 12:03:11.838894    1274 log.go:172] (0xc00071a370) (0xc000650c80) Stream removed, broadcasting: 3\nI0130 12:03:11.839015    1274 log.go:172] (0xc00073c640) (1) Data frame handling\nI0130 12:03:11.839041    1274 log.go:172] (0xc00073c640) (1) Data frame sent\nI0130 12:03:11.839061    1274 log.go:172] (0xc00071a370) (0xc00073c640) Stream removed, broadcasting: 1\nI0130 12:03:11.840080    1274 log.go:172] (0xc00071a370) (0xc000790000) Stream removed, broadcasting: 5\nI0130 12:03:11.840150    1274 log.go:172] (0xc00071a370) (0xc00073c640) Stream removed, broadcasting: 1\nI0130 12:03:11.840165    1274 log.go:172] (0xc00071a370) (0xc000650c80) Stream removed, broadcasting: 3\nI0130 12:03:11.840176    1274 log.go:172] (0xc00071a370) (0xc000790000) Stream removed, broadcasting: 5\nI0130 
12:03:11.840337    1274 log.go:172] (0xc00071a370) Go away received\n"
Jan 30 12:03:11.866: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 12:03:11.866: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 30 12:03:11.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fzfqb ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 12:03:12.780: INFO: stderr: "I0130 12:03:12.289837    1296 log.go:172] (0xc00019e630) (0xc000752780) Create stream\nI0130 12:03:12.290289    1296 log.go:172] (0xc00019e630) (0xc000752780) Stream added, broadcasting: 1\nI0130 12:03:12.296159    1296 log.go:172] (0xc00019e630) Reply frame received for 1\nI0130 12:03:12.296199    1296 log.go:172] (0xc00019e630) (0xc0004f03c0) Create stream\nI0130 12:03:12.296208    1296 log.go:172] (0xc00019e630) (0xc0004f03c0) Stream added, broadcasting: 3\nI0130 12:03:12.296935    1296 log.go:172] (0xc00019e630) Reply frame received for 3\nI0130 12:03:12.296965    1296 log.go:172] (0xc00019e630) (0xc0004f0500) Create stream\nI0130 12:03:12.296978    1296 log.go:172] (0xc00019e630) (0xc0004f0500) Stream added, broadcasting: 5\nI0130 12:03:12.298038    1296 log.go:172] (0xc00019e630) Reply frame received for 5\nI0130 12:03:12.458024    1296 log.go:172] (0xc00019e630) Data frame received for 3\nI0130 12:03:12.458191    1296 log.go:172] (0xc0004f03c0) (3) Data frame handling\nI0130 12:03:12.458231    1296 log.go:172] (0xc0004f03c0) (3) Data frame sent\nI0130 12:03:12.767727    1296 log.go:172] (0xc00019e630) Data frame received for 1\nI0130 12:03:12.767835    1296 log.go:172] (0xc000752780) (1) Data frame handling\nI0130 12:03:12.767864    1296 log.go:172] (0xc000752780) (1) Data frame sent\nI0130 12:03:12.767890    1296 log.go:172] (0xc00019e630) (0xc000752780) Stream removed, broadcasting: 1\nI0130 12:03:12.769730    1296 log.go:172] (0xc00019e630) (0xc0004f0500) Stream removed, broadcasting: 5\nI0130 12:03:12.769870    1296 log.go:172] (0xc00019e630) (0xc0004f03c0) Stream removed, broadcasting: 3\nI0130 12:03:12.769984    1296 log.go:172] (0xc00019e630) (0xc000752780) Stream removed, broadcasting: 1\nI0130 12:03:12.769999    1296 log.go:172] (0xc00019e630) (0xc0004f03c0) Stream removed, broadcasting: 3\nI0130 12:03:12.770009    1296 log.go:172] (0xc00019e630) (0xc0004f0500) Stream removed, broadcasting: 5\nI0130 
12:03:12.770137    1296 log.go:172] (0xc00019e630) Go away received\n"
Jan 30 12:03:12.780: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 12:03:12.780: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 30 12:03:12.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fzfqb ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 12:03:13.179: INFO: stderr: "I0130 12:03:12.974836    1319 log.go:172] (0xc0006e62c0) (0xc00066d2c0) Create stream\nI0130 12:03:12.975038    1319 log.go:172] (0xc0006e62c0) (0xc00066d2c0) Stream added, broadcasting: 1\nI0130 12:03:12.979353    1319 log.go:172] (0xc0006e62c0) Reply frame received for 1\nI0130 12:03:12.979394    1319 log.go:172] (0xc0006e62c0) (0xc00066d360) Create stream\nI0130 12:03:12.979406    1319 log.go:172] (0xc0006e62c0) (0xc00066d360) Stream added, broadcasting: 3\nI0130 12:03:12.980561    1319 log.go:172] (0xc0006e62c0) Reply frame received for 3\nI0130 12:03:12.980583    1319 log.go:172] (0xc0006e62c0) (0xc0006e4000) Create stream\nI0130 12:03:12.980594    1319 log.go:172] (0xc0006e62c0) (0xc0006e4000) Stream added, broadcasting: 5\nI0130 12:03:12.981570    1319 log.go:172] (0xc0006e62c0) Reply frame received for 5\nI0130 12:03:13.080206    1319 log.go:172] (0xc0006e62c0) Data frame received for 3\nI0130 12:03:13.080281    1319 log.go:172] (0xc00066d360) (3) Data frame handling\nI0130 12:03:13.080308    1319 log.go:172] (0xc00066d360) (3) Data frame sent\nI0130 12:03:13.172316    1319 log.go:172] (0xc0006e62c0) (0xc00066d360) Stream removed, broadcasting: 3\nI0130 12:03:13.172519    1319 log.go:172] (0xc0006e62c0) Data frame received for 1\nI0130 12:03:13.172541    1319 log.go:172] (0xc00066d2c0) (1) Data frame handling\nI0130 12:03:13.172567    1319 log.go:172] (0xc00066d2c0) (1) Data frame sent\nI0130 12:03:13.172614    1319 log.go:172] (0xc0006e62c0) (0xc00066d2c0) Stream removed, broadcasting: 1\nI0130 12:03:13.173079    1319 log.go:172] (0xc0006e62c0) (0xc0006e4000) Stream removed, broadcasting: 5\nI0130 12:03:13.173105    1319 log.go:172] (0xc0006e62c0) Go away received\nI0130 12:03:13.173536    1319 log.go:172] (0xc0006e62c0) (0xc00066d2c0) Stream removed, broadcasting: 1\nI0130 12:03:13.173559    1319 log.go:172] (0xc0006e62c0) (0xc00066d360) Stream removed, broadcasting: 3\nI0130 12:03:13.173567    1319 log.go:172] 
(0xc0006e62c0) (0xc0006e4000) Stream removed, broadcasting: 5\n"
Jan 30 12:03:13.179: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 12:03:13.179: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

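After breaking the probe on all three replicas, the harness polls each pod until its Ready condition flips to False (the "Waiting for pod ss-N to enter Running - Ready=false" lines that follow). A sketch of that wait-with-deadline loop, again with a stub in place of the real query (which would resemble `kubectl get pod "$pod" -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'`):

```shell
#!/bin/sh
# Stub for the real Ready-condition query; hardcoded for this offline sketch.
pod_ready() { echo False; }

# Poll until the pod reports the wanted Ready status, or give up after $3 s.
wait_for_ready_status() {
    pod=$1 want=$2 timeout=$3
    end=$(( $(date +%s) + timeout ))
    while [ "$(date +%s)" -le "$end" ]; do
        got=$(pod_ready "$pod")
        if [ "$got" = "$want" ]; then
            echo "$pod reached Ready=$want"
            return 0
        fi
        echo "waiting for $pod: want Ready=$want, currently Ready=$got"
        sleep 1
    done
    echo "timed out waiting for $pod"
    return 1
}

wait_for_ready_status ss-0 False 5
```

The same loop with `want=true` produces the "Waiting for pod ss-N to enter Running - Ready=true" lines earlier in the log; only the target status differs.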
Jan 30 12:03:13.179: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 12:03:13.195: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 30 12:03:23.219: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 12:03:23.219: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 12:03:23.219: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 12:03:23.252: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 30 12:03:23.252: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  }]
Jan 30 12:03:23.252: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:23.252: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:23.252: INFO: 
Jan 30 12:03:23.252: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 30 12:03:24.281: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 30 12:03:24.281: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  }]
Jan 30 12:03:24.282: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:24.282: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:24.282: INFO: 
Jan 30 12:03:24.282: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 30 12:03:25.753: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 30 12:03:25.754: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  }]
Jan 30 12:03:25.754: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:25.754: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:25.754: INFO: 
Jan 30 12:03:25.754: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 30 12:03:27.193: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 30 12:03:27.193: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  }]
Jan 30 12:03:27.193: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:27.193: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:27.193: INFO: 
Jan 30 12:03:27.193: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 30 12:03:28.646: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 30 12:03:28.646: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  }]
Jan 30 12:03:28.647: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:28.647: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:28.647: INFO: 
Jan 30 12:03:28.647: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 30 12:03:31.082: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 30 12:03:31.082: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  }]
Jan 30 12:03:31.083: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:31.083: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:31.083: INFO: 
Jan 30 12:03:31.083: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 30 12:03:32.104: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 30 12:03:32.104: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  }]
Jan 30 12:03:32.104: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:32.104: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:32.104: INFO: 
Jan 30 12:03:32.104: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 30 12:03:33.124: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 30 12:03:33.125: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:27 +0000 UTC  }]
Jan 30 12:03:33.125: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:33.125: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:03:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 12:02:48 +0000 UTC  }]
Jan 30 12:03:33.125: INFO: 
Jan 30 12:03:33.125: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-fzfqb
Jan 30 12:03:34.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fzfqb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 12:03:34.335: INFO: rc: 1
Jan 30 12:03:34.336: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fzfqb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0011a1c50 exit status 1   true [0xc000bb7a68 0xc000bb7ab0 0xc000bb7ac8] [0xc000bb7a68 0xc000bb7ab0 0xc000bb7ac8] [0xc000bb7aa8 0xc000bb7ac0] [0x935700 0x935700] 0xc000db6780 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan 30 12:03:44.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fzfqb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 12:03:44.495: INFO: rc: 1
Jan 30 12:03:44.496: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fzfqb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0011a1da0 exit status 1   true [0xc000bb7ad0 0xc000bb7ae8 0xc000bb7b00] [0xc000bb7ad0 0xc000bb7ae8 0xc000bb7b00] [0xc000bb7ae0 0xc000bb7af8] [0x935700 0x935700] 0xc000db72c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 30 12:08:39.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-fzfqb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 12:08:40.076: INFO: rc: 1
Jan 30 12:08:40.077: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
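The failed RunHostCmd above is re-run on a fixed 10-second cadence until it stops failing or the suite gives up. That pattern can be sketched as a generic poll-with-deadline helper (illustrative names, not the framework's real API; in the log the callable would shell out to `kubectl exec`):

```python
import time

def retry_host_cmd(run, timeout_s=300.0, interval_s=10.0):
    """Call run() until it returns (True, output) or the deadline expires.

    run() models one attempt of the remote command: it returns a (success,
    output) pair, matching the rc/stdout pairs logged above.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        ok, output = run()
        if ok:
            return output
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command still failing after {timeout_s}s")
        time.sleep(interval_s)
```

In the transcript each attempt logs `rc: 1` and then "Waiting 10s to retry"; `interval_s=10` reproduces that spacing.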
Jan 30 12:08:40.077: INFO: Scaling statefulset ss to 0
Jan 30 12:08:40.118: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 30 12:08:40.126: INFO: Deleting all statefulset in ns e2e-tests-statefulset-fzfqb
Jan 30 12:08:40.132: INFO: Scaling statefulset ss to 0
Jan 30 12:08:40.146: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 12:08:40.151: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:08:40.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-fzfqb" for this suite.
Jan 30 12:08:48.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:08:48.663: INFO: namespace: e2e-tests-statefulset-fzfqb, resource: bindings, ignored listing per whitelist
Jan 30 12:08:48.917: INFO: namespace e2e-tests-statefulset-fzfqb deletion completed in 8.725793254s

• [SLOW TEST:381.778 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:08:48.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 30 12:08:49.170: INFO: Waiting up to 5m0s for pod "pod-45dca062-4359-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-j27x9" to be "success or failure"
Jan 30 12:08:49.174: INFO: Pod "pod-45dca062-4359-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.912779ms
Jan 30 12:08:51.227: INFO: Pod "pod-45dca062-4359-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057625624s
Jan 30 12:08:53.241: INFO: Pod "pod-45dca062-4359-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071487042s
Jan 30 12:08:55.288: INFO: Pod "pod-45dca062-4359-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118438865s
Jan 30 12:08:57.421: INFO: Pod "pod-45dca062-4359-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.251402386s
Jan 30 12:08:59.445: INFO: Pod "pod-45dca062-4359-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.275230555s
STEP: Saw pod success
Jan 30 12:08:59.445: INFO: Pod "pod-45dca062-4359-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:08:59.451: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-45dca062-4359-11ea-a47a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 30 12:08:59.512: INFO: Waiting for pod pod-45dca062-4359-11ea-a47a-0242ac110005 to disappear
Jan 30 12:08:59.580: INFO: Pod pod-45dca062-4359-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:08:59.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-j27x9" for this suite.
Jan 30 12:09:05.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:09:05.770: INFO: namespace: e2e-tests-emptydir-j27x9, resource: bindings, ignored listing per whitelist
Jan 30 12:09:05.833: INFO: namespace e2e-tests-emptydir-j27x9 deletion completed in 6.240469508s

• [SLOW TEST:16.915 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
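The EmptyDir test above verifies that a volume on the default medium is mounted with the expected permission bits. A local stand-in for that mode check, using a plain directory in place of the mount (assumption: the expected mode is 0777, matching upstream emptyDir default-medium behavior):

```python
import os
import stat
import tempfile

def volume_mode(path):
    """Return only the permission bits of path (mode without file type)."""
    return stat.S_IMODE(os.stat(path).st_mode)

# A temp directory plays the role of the emptyDir mount point.
d = tempfile.mkdtemp()
os.chmod(d, 0o777)
print(oct(volume_mode(d)))  # 0o777
```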
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:09:05.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 12:09:06.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4feeeda7-4359-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-nnr7v" to be "success or failure"
Jan 30 12:09:06.074: INFO: Pod "downwardapi-volume-4feeeda7-4359-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.273939ms
Jan 30 12:09:08.090: INFO: Pod "downwardapi-volume-4feeeda7-4359-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025780819s
Jan 30 12:09:10.108: INFO: Pod "downwardapi-volume-4feeeda7-4359-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043327048s
Jan 30 12:09:12.215: INFO: Pod "downwardapi-volume-4feeeda7-4359-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151039522s
Jan 30 12:09:14.248: INFO: Pod "downwardapi-volume-4feeeda7-4359-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183959421s
Jan 30 12:09:16.281: INFO: Pod "downwardapi-volume-4feeeda7-4359-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.216799548s
STEP: Saw pod success
Jan 30 12:09:16.281: INFO: Pod "downwardapi-volume-4feeeda7-4359-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:09:16.287: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4feeeda7-4359-11ea-a47a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 30 12:09:16.449: INFO: Waiting for pod downwardapi-volume-4feeeda7-4359-11ea-a47a-0242ac110005 to disappear
Jan 30 12:09:16.464: INFO: Pod downwardapi-volume-4feeeda7-4359-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:09:16.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nnr7v" for this suite.
Jan 30 12:09:22.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:09:22.654: INFO: namespace: e2e-tests-projected-nnr7v, resource: bindings, ignored listing per whitelist
Jan 30 12:09:22.738: INFO: namespace e2e-tests-projected-nnr7v deletion completed in 6.256560805s

• [SLOW TEST:16.905 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
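Editor's note: the pod manifest the spec above creates is not shown in the log. A minimal sketch of a pod that exposes its CPU limit through a projected downward API volume, consistent with the `client-container` name reported above (pod name, image, and limit value are assumptions), might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical; the test appends a generated UID
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name taken from the log above
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"                     # assumed limit; the file below reports this value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```

The test then reads the container's logs and checks that the mounted file reflects the declared CPU limit, which is why the pod is expected to reach `Succeeded` ("success or failure" condition).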
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:09:22.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-zm5k7
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 30 12:09:22.957: INFO: Found 0 stateful pods, waiting for 3
Jan 30 12:09:32.978: INFO: Found 2 stateful pods, waiting for 3
Jan 30 12:09:42.978: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 12:09:42.978: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 12:09:42.978: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 30 12:09:52.985: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 12:09:52.985: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 12:09:52.985: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 12:09:53.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zm5k7 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 12:09:53.683: INFO: stderr: "I0130 12:09:53.231903    1964 log.go:172] (0xc0007c84d0) (0xc0005c12c0) Create stream\nI0130 12:09:53.232422    1964 log.go:172] (0xc0007c84d0) (0xc0005c12c0) Stream added, broadcasting: 1\nI0130 12:09:53.278567    1964 log.go:172] (0xc0007c84d0) Reply frame received for 1\nI0130 12:09:53.278694    1964 log.go:172] (0xc0007c84d0) (0xc0005c1360) Create stream\nI0130 12:09:53.278707    1964 log.go:172] (0xc0007c84d0) (0xc0005c1360) Stream added, broadcasting: 3\nI0130 12:09:53.283931    1964 log.go:172] (0xc0007c84d0) Reply frame received for 3\nI0130 12:09:53.283968    1964 log.go:172] (0xc0007c84d0) (0xc000752000) Create stream\nI0130 12:09:53.283985    1964 log.go:172] (0xc0007c84d0) (0xc000752000) Stream added, broadcasting: 5\nI0130 12:09:53.285510    1964 log.go:172] (0xc0007c84d0) Reply frame received for 5\nI0130 12:09:53.515184    1964 log.go:172] (0xc0007c84d0) Data frame received for 3\nI0130 12:09:53.515269    1964 log.go:172] (0xc0005c1360) (3) Data frame handling\nI0130 12:09:53.515306    1964 log.go:172] (0xc0005c1360) (3) Data frame sent\nI0130 12:09:53.668661    1964 log.go:172] (0xc0007c84d0) (0xc0005c1360) Stream removed, broadcasting: 3\nI0130 12:09:53.669191    1964 log.go:172] (0xc0007c84d0) Data frame received for 1\nI0130 12:09:53.669210    1964 log.go:172] (0xc0005c12c0) (1) Data frame handling\nI0130 12:09:53.669232    1964 log.go:172] (0xc0005c12c0) (1) Data frame sent\nI0130 12:09:53.669245    1964 log.go:172] (0xc0007c84d0) (0xc0005c12c0) Stream removed, broadcasting: 1\nI0130 12:09:53.669661    1964 log.go:172] (0xc0007c84d0) (0xc000752000) Stream removed, broadcasting: 5\nI0130 12:09:53.669712    1964 log.go:172] (0xc0007c84d0) (0xc0005c12c0) Stream removed, broadcasting: 1\nI0130 12:09:53.669720    1964 log.go:172] (0xc0007c84d0) (0xc0005c1360) Stream removed, broadcasting: 3\nI0130 12:09:53.669731    1964 log.go:172] (0xc0007c84d0) (0xc000752000) Stream removed, broadcasting: 5\n"
Jan 30 12:09:53.684: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 12:09:53.684: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 30 12:10:03.779: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 30 12:10:13.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zm5k7 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 12:10:14.737: INFO: stderr: "I0130 12:10:14.270635    1987 log.go:172] (0xc0008902c0) (0xc00064f4a0) Create stream\nI0130 12:10:14.271178    1987 log.go:172] (0xc0008902c0) (0xc00064f4a0) Stream added, broadcasting: 1\nI0130 12:10:14.281375    1987 log.go:172] (0xc0008902c0) Reply frame received for 1\nI0130 12:10:14.281468    1987 log.go:172] (0xc0008902c0) (0xc00064f540) Create stream\nI0130 12:10:14.281484    1987 log.go:172] (0xc0008902c0) (0xc00064f540) Stream added, broadcasting: 3\nI0130 12:10:14.284940    1987 log.go:172] (0xc0008902c0) Reply frame received for 3\nI0130 12:10:14.285006    1987 log.go:172] (0xc0008902c0) (0xc0007d8dc0) Create stream\nI0130 12:10:14.285028    1987 log.go:172] (0xc0008902c0) (0xc0007d8dc0) Stream added, broadcasting: 5\nI0130 12:10:14.289357    1987 log.go:172] (0xc0008902c0) Reply frame received for 5\nI0130 12:10:14.568514    1987 log.go:172] (0xc0008902c0) Data frame received for 3\nI0130 12:10:14.568807    1987 log.go:172] (0xc00064f540) (3) Data frame handling\nI0130 12:10:14.568918    1987 log.go:172] (0xc00064f540) (3) Data frame sent\nI0130 12:10:14.722503    1987 log.go:172] (0xc0008902c0) Data frame received for 1\nI0130 12:10:14.723849    1987 log.go:172] (0xc0008902c0) (0xc0007d8dc0) Stream removed, broadcasting: 5\nI0130 12:10:14.724100    1987 log.go:172] (0xc00064f4a0) (1) Data frame handling\nI0130 12:10:14.724181    1987 log.go:172] (0xc00064f4a0) (1) Data frame sent\nI0130 12:10:14.724244    1987 log.go:172] (0xc0008902c0) (0xc00064f540) Stream removed, broadcasting: 3\nI0130 12:10:14.724323    1987 log.go:172] (0xc0008902c0) (0xc00064f4a0) Stream removed, broadcasting: 1\nI0130 12:10:14.724370    1987 log.go:172] (0xc0008902c0) Go away received\nI0130 12:10:14.725210    1987 log.go:172] (0xc0008902c0) (0xc00064f4a0) Stream removed, broadcasting: 1\nI0130 12:10:14.725237    1987 log.go:172] (0xc0008902c0) (0xc00064f540) Stream removed, broadcasting: 3\nI0130 12:10:14.725250    1987 log.go:172] (0xc0008902c0) (0xc0007d8dc0) Stream removed, broadcasting: 5\n"
Jan 30 12:10:14.738: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 12:10:14.738: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 12:10:24.834: INFO: Waiting for StatefulSet e2e-tests-statefulset-zm5k7/ss2 to complete update
Jan 30 12:10:24.834: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 12:10:24.834: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 12:10:24.834: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 12:10:34.856: INFO: Waiting for StatefulSet e2e-tests-statefulset-zm5k7/ss2 to complete update
Jan 30 12:10:34.856: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 12:10:34.856: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 12:10:44.868: INFO: Waiting for StatefulSet e2e-tests-statefulset-zm5k7/ss2 to complete update
Jan 30 12:10:44.869: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 12:10:44.869: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 12:10:54.964: INFO: Waiting for StatefulSet e2e-tests-statefulset-zm5k7/ss2 to complete update
Jan 30 12:10:54.964: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 12:11:04.857: INFO: Waiting for StatefulSet e2e-tests-statefulset-zm5k7/ss2 to complete update
Jan 30 12:11:04.857: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 12:11:14.863: INFO: Waiting for StatefulSet e2e-tests-statefulset-zm5k7/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 30 12:11:24.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zm5k7 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 12:11:25.526: INFO: stderr: "I0130 12:11:25.120983    2009 log.go:172] (0xc0006fa370) (0xc0005d52c0) Create stream\nI0130 12:11:25.121771    2009 log.go:172] (0xc0006fa370) (0xc0005d52c0) Stream added, broadcasting: 1\nI0130 12:11:25.138410    2009 log.go:172] (0xc0006fa370) Reply frame received for 1\nI0130 12:11:25.138656    2009 log.go:172] (0xc0006fa370) (0xc000738000) Create stream\nI0130 12:11:25.138731    2009 log.go:172] (0xc0006fa370) (0xc000738000) Stream added, broadcasting: 3\nI0130 12:11:25.140846    2009 log.go:172] (0xc0006fa370) Reply frame received for 3\nI0130 12:11:25.140898    2009 log.go:172] (0xc0006fa370) (0xc000546000) Create stream\nI0130 12:11:25.140922    2009 log.go:172] (0xc0006fa370) (0xc000546000) Stream added, broadcasting: 5\nI0130 12:11:25.143905    2009 log.go:172] (0xc0006fa370) Reply frame received for 5\nI0130 12:11:25.344829    2009 log.go:172] (0xc0006fa370) Data frame received for 3\nI0130 12:11:25.344948    2009 log.go:172] (0xc000738000) (3) Data frame handling\nI0130 12:11:25.344989    2009 log.go:172] (0xc000738000) (3) Data frame sent\nI0130 12:11:25.512110    2009 log.go:172] (0xc0006fa370) (0xc000546000) Stream removed, broadcasting: 5\nI0130 12:11:25.512348    2009 log.go:172] (0xc0006fa370) Data frame received for 1\nI0130 12:11:25.512580    2009 log.go:172] (0xc0006fa370) (0xc000738000) Stream removed, broadcasting: 3\nI0130 12:11:25.512668    2009 log.go:172] (0xc0005d52c0) (1) Data frame handling\nI0130 12:11:25.512697    2009 log.go:172] (0xc0005d52c0) (1) Data frame sent\nI0130 12:11:25.512713    2009 log.go:172] (0xc0006fa370) (0xc0005d52c0) Stream removed, broadcasting: 1\nI0130 12:11:25.512743    2009 log.go:172] (0xc0006fa370) Go away received\nI0130 12:11:25.514091    2009 log.go:172] (0xc0006fa370) (0xc0005d52c0) Stream removed, broadcasting: 1\nI0130 12:11:25.514113    2009 log.go:172] (0xc0006fa370) (0xc000738000) Stream removed, broadcasting: 3\nI0130 12:11:25.514128    2009 log.go:172] (0xc0006fa370) (0xc000546000) Stream removed, broadcasting: 5\n"
Jan 30 12:11:25.527: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 12:11:25.527: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 30 12:11:35.644: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 30 12:11:45.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zm5k7 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 12:11:46.224: INFO: stderr: "I0130 12:11:45.954322    2031 log.go:172] (0xc0007ce420) (0xc000681220) Create stream\nI0130 12:11:45.954651    2031 log.go:172] (0xc0007ce420) (0xc000681220) Stream added, broadcasting: 1\nI0130 12:11:45.959883    2031 log.go:172] (0xc0007ce420) Reply frame received for 1\nI0130 12:11:45.959922    2031 log.go:172] (0xc0007ce420) (0xc000718000) Create stream\nI0130 12:11:45.959933    2031 log.go:172] (0xc0007ce420) (0xc000718000) Stream added, broadcasting: 3\nI0130 12:11:45.960913    2031 log.go:172] (0xc0007ce420) Reply frame received for 3\nI0130 12:11:45.960948    2031 log.go:172] (0xc0007ce420) (0xc0007bc000) Create stream\nI0130 12:11:45.960956    2031 log.go:172] (0xc0007ce420) (0xc0007bc000) Stream added, broadcasting: 5\nI0130 12:11:45.962826    2031 log.go:172] (0xc0007ce420) Reply frame received for 5\nI0130 12:11:46.070023    2031 log.go:172] (0xc0007ce420) Data frame received for 3\nI0130 12:11:46.070449    2031 log.go:172] (0xc000718000) (3) Data frame handling\nI0130 12:11:46.070531    2031 log.go:172] (0xc000718000) (3) Data frame sent\nI0130 12:11:46.212263    2031 log.go:172] (0xc0007ce420) Data frame received for 1\nI0130 12:11:46.212451    2031 log.go:172] (0xc0007ce420) (0xc000718000) Stream removed, broadcasting: 3\nI0130 12:11:46.212546    2031 log.go:172] (0xc000681220) (1) Data frame handling\nI0130 12:11:46.212572    2031 log.go:172] (0xc000681220) (1) Data frame sent\nI0130 12:11:46.212592    2031 log.go:172] (0xc0007ce420) (0xc000681220) Stream removed, broadcasting: 1\nI0130 12:11:46.212646    2031 log.go:172] (0xc0007ce420) (0xc0007bc000) Stream removed, broadcasting: 5\nI0130 12:11:46.213384    2031 log.go:172] (0xc0007ce420) (0xc000681220) Stream removed, broadcasting: 1\nI0130 12:11:46.213419    2031 log.go:172] (0xc0007ce420) (0xc000718000) Stream removed, broadcasting: 3\nI0130 12:11:46.213430    2031 log.go:172] (0xc0007ce420) (0xc0007bc000) Stream removed, broadcasting: 5\n"
Jan 30 12:11:46.224: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 12:11:46.224: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 12:11:56.291: INFO: Waiting for StatefulSet e2e-tests-statefulset-zm5k7/ss2 to complete update
Jan 30 12:11:56.291: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 12:11:56.291: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 12:11:56.291: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 12:12:06.345: INFO: Waiting for StatefulSet e2e-tests-statefulset-zm5k7/ss2 to complete update
Jan 30 12:12:06.345: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 12:12:06.345: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 12:12:16.315: INFO: Waiting for StatefulSet e2e-tests-statefulset-zm5k7/ss2 to complete update
Jan 30 12:12:16.315: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 12:12:16.315: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 12:12:26.347: INFO: Waiting for StatefulSet e2e-tests-statefulset-zm5k7/ss2 to complete update
Jan 30 12:12:26.347: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 12:12:36.316: INFO: Waiting for StatefulSet e2e-tests-statefulset-zm5k7/ss2 to complete update
Jan 30 12:12:36.316: INFO: Waiting for Pod e2e-tests-statefulset-zm5k7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 12:12:46.326: INFO: Waiting for StatefulSet e2e-tests-statefulset-zm5k7/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 30 12:12:56.317: INFO: Deleting all statefulset in ns e2e-tests-statefulset-zm5k7
Jan 30 12:12:56.322: INFO: Scaling statefulset ss2 to 0
Jan 30 12:13:36.368: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 12:13:36.377: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:13:36.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-zm5k7" for this suite.
Jan 30 12:13:46.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:13:46.787: INFO: namespace: e2e-tests-statefulset-zm5k7, resource: bindings, ignored listing per whitelist
Jan 30 12:13:46.816: INFO: namespace e2e-tests-statefulset-zm5k7 deletion completed in 10.267983767s

• [SLOW TEST:264.078 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
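Editor's note: the `ss2` StatefulSet driving the rolling update and rollback above is created in code, not shown in the log. A sketch consistent with the names and images the log reports (selector labels are assumptions) might look like:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test                  # the headless "service test" created in BeforeEach above
  replicas: 3                        # the log waits for pods ss2-0, ss2-1, ss2-2
  selector:
    matchLabels:
      app: ss2                       # assumed label
  updateStrategy:
    type: RollingUpdate              # pods are replaced in reverse ordinal order, as logged
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # updated to 1.15-alpine mid-test, then rolled back
```

Each image change produces a new controller revision (`ss2-6c5cd755cd`, `ss2-7c9b54fd4c` above), and the "Waiting for Pod … to have revision … update revision …" lines track each ordinal converging to the target revision.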
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:13:46.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 30 12:13:47.138: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:14:09.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-6cwfb" for this suite.
Jan 30 12:14:33.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:14:34.040: INFO: namespace: e2e-tests-init-container-6cwfb, resource: bindings, ignored listing per whitelist
Jan 30 12:14:34.201: INFO: namespace e2e-tests-init-container-6cwfb deletion completed in 24.475622765s

• [SLOW TEST:47.385 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
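Editor's note: the spec above only logs "PodSpec: initContainers in spec.initContainers". A minimal sketch of a `RestartAlways` pod with init containers (names and images are assumptions) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example             # hypothetical name
spec:
  restartPolicy: Always
  initContainers:                    # run to completion, in order, before the app container starts
  - name: init1
    image: busybox                   # assumed image
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: busybox
    command: ["sleep", "3600"]
```

The conformance check is that both init containers terminate successfully before `run1` starts, and that the pod's `Initialized` condition flips to true.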
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:14:34.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-13aaf5e7-435a-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 12:14:34.477: INFO: Waiting up to 5m0s for pod "pod-secrets-13ad01c9-435a-11ea-a47a-0242ac110005" in namespace "e2e-tests-secrets-5h8wp" to be "success or failure"
Jan 30 12:14:34.499: INFO: Pod "pod-secrets-13ad01c9-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.999022ms
Jan 30 12:14:36.567: INFO: Pod "pod-secrets-13ad01c9-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089973547s
Jan 30 12:14:38.682: INFO: Pod "pod-secrets-13ad01c9-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20486994s
Jan 30 12:14:40.695: INFO: Pod "pod-secrets-13ad01c9-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.217758886s
Jan 30 12:14:42.732: INFO: Pod "pod-secrets-13ad01c9-435a-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.254831612s
STEP: Saw pod success
Jan 30 12:14:42.733: INFO: Pod "pod-secrets-13ad01c9-435a-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:14:42.773: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-13ad01c9-435a-11ea-a47a-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan 30 12:14:42.967: INFO: Waiting for pod pod-secrets-13ad01c9-435a-11ea-a47a-0242ac110005 to disappear
Jan 30 12:14:42.986: INFO: Pod pod-secrets-13ad01c9-435a-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:14:42.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5h8wp" for this suite.
Jan 30 12:14:49.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:14:49.241: INFO: namespace: e2e-tests-secrets-5h8wp, resource: bindings, ignored listing per whitelist
Jan 30 12:14:49.285: INFO: namespace e2e-tests-secrets-5h8wp deletion completed in 6.223498213s

• [SLOW TEST:15.082 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
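Editor's note: a sketch of the secret and the consuming pod from the spec above, consistent with the `secret-env-test` container name in the log (secret payload and env var name are assumptions; the generated name suffix is omitted):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test                  # the test's actual name carries a generated UID suffix
data:
  data-1: dmFsdWUtMQ==               # base64 of "value-1" (assumed payload)
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test            # container name taken from the log above
    image: busybox                   # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA              # assumed env var name
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```

The test asserts that the secret value appears in the container's environment by grepping the pod logs, hence the `Succeeded` phase check.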
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:14:49.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-wtlcv/configmap-test-1ca68eec-435a-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 12:14:49.532: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ca79b57-435a-11ea-a47a-0242ac110005" in namespace "e2e-tests-configmap-wtlcv" to be "success or failure"
Jan 30 12:14:49.549: INFO: Pod "pod-configmaps-1ca79b57-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.367178ms
Jan 30 12:14:51.597: INFO: Pod "pod-configmaps-1ca79b57-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064869965s
Jan 30 12:14:53.645: INFO: Pod "pod-configmaps-1ca79b57-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113358501s
Jan 30 12:14:55.679: INFO: Pod "pod-configmaps-1ca79b57-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146634153s
Jan 30 12:14:57.714: INFO: Pod "pod-configmaps-1ca79b57-435a-11ea-a47a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.182504346s
Jan 30 12:14:59.731: INFO: Pod "pod-configmaps-1ca79b57-435a-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.199569467s
STEP: Saw pod success
Jan 30 12:14:59.732: INFO: Pod "pod-configmaps-1ca79b57-435a-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:14:59.738: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1ca79b57-435a-11ea-a47a-0242ac110005 container env-test: 
STEP: delete the pod
Jan 30 12:14:59.960: INFO: Waiting for pod pod-configmaps-1ca79b57-435a-11ea-a47a-0242ac110005 to disappear
Jan 30 12:14:59.984: INFO: Pod pod-configmaps-1ca79b57-435a-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:14:59.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wtlcv" for this suite.
Jan 30 12:15:06.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:15:06.288: INFO: namespace: e2e-tests-configmap-wtlcv, resource: bindings, ignored listing per whitelist
Jan 30 12:15:06.320: INFO: namespace e2e-tests-configmap-wtlcv deletion completed in 6.328367169s

• [SLOW TEST:17.035 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
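Editor's note: the environment-consumption variant above is the ConfigMap analogue of the secret test. A sketch consistent with the `env-test` container name in the log (data keys and env var name are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test               # generated UID suffix omitted
data:
  data-1: value-1                    # assumed payload
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test                   # container name taken from the log above
    image: busybox                   # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1            # assumed env var name
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```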
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:15:06.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-26e13ee9-435a-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 12:15:06.686: INFO: Waiting up to 5m0s for pod "pod-configmaps-26e294ef-435a-11ea-a47a-0242ac110005" in namespace "e2e-tests-configmap-8tvtc" to be "success or failure"
Jan 30 12:15:06.702: INFO: Pod "pod-configmaps-26e294ef-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.628887ms
Jan 30 12:15:08.719: INFO: Pod "pod-configmaps-26e294ef-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032896179s
Jan 30 12:15:11.054: INFO: Pod "pod-configmaps-26e294ef-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367685952s
Jan 30 12:15:13.084: INFO: Pod "pod-configmaps-26e294ef-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398429387s
Jan 30 12:15:15.105: INFO: Pod "pod-configmaps-26e294ef-435a-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.419006989s
STEP: Saw pod success
Jan 30 12:15:15.105: INFO: Pod "pod-configmaps-26e294ef-435a-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:15:15.118: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-26e294ef-435a-11ea-a47a-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 30 12:15:15.249: INFO: Waiting for pod pod-configmaps-26e294ef-435a-11ea-a47a-0242ac110005 to disappear
Jan 30 12:15:15.263: INFO: Pod pod-configmaps-26e294ef-435a-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:15:15.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8tvtc" for this suite.
Jan 30 12:15:21.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:15:21.358: INFO: namespace: e2e-tests-configmap-8tvtc, resource: bindings, ignored listing per whitelist
Jan 30 12:15:21.535: INFO: namespace e2e-tests-configmap-8tvtc deletion completed in 6.257537069s

• [SLOW TEST:15.214 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
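Editor's note: the volume-consumption variant above mounts the ConfigMap as files instead of environment variables. A sketch consistent with the `configmap-volume-test` container name in the log (mount path and key are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-volume-example  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test      # container name taken from the log above
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume    # generated UID suffix omitted
```

Each key in the ConfigMap's `data` becomes a file under the mount path; the test reads one back via the container's logs.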
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:15:21.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 30 12:15:21.893: INFO: namespace e2e-tests-kubectl-qd94j
Jan 30 12:15:21.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qd94j'
Jan 30 12:15:24.471: INFO: stderr: ""
Jan 30 12:15:24.471: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 30 12:15:25.482: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 12:15:25.482: INFO: Found 0 / 1
Jan 30 12:15:26.503: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 12:15:26.503: INFO: Found 0 / 1
Jan 30 12:15:27.487: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 12:15:27.488: INFO: Found 0 / 1
Jan 30 12:15:28.508: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 12:15:28.508: INFO: Found 0 / 1
Jan 30 12:15:29.543: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 12:15:29.543: INFO: Found 0 / 1
Jan 30 12:15:30.500: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 12:15:30.501: INFO: Found 0 / 1
Jan 30 12:15:31.511: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 12:15:31.511: INFO: Found 0 / 1
Jan 30 12:15:32.492: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 12:15:32.492: INFO: Found 1 / 1
Jan 30 12:15:32.492: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 30 12:15:32.507: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 12:15:32.507: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 30 12:15:32.507: INFO: wait on redis-master startup in e2e-tests-kubectl-qd94j 
Jan 30 12:15:32.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zwd5z redis-master --namespace=e2e-tests-kubectl-qd94j'
Jan 30 12:15:32.775: INFO: stderr: ""
Jan 30 12:15:32.775: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 30 Jan 12:15:31.493 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Jan 12:15:31.493 # Server started, Redis version 3.2.12\n1:M 30 Jan 12:15:31.493 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Jan 12:15:31.493 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 30 12:15:32.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-qd94j'
Jan 30 12:15:32.999: INFO: stderr: ""
Jan 30 12:15:32.999: INFO: stdout: "service/rm2 exposed\n"
Jan 30 12:15:33.023: INFO: Service rm2 in namespace e2e-tests-kubectl-qd94j found.
STEP: exposing service
Jan 30 12:15:35.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-qd94j'
Jan 30 12:15:35.504: INFO: stderr: ""
Jan 30 12:15:35.504: INFO: stdout: "service/rm3 exposed\n"
Jan 30 12:15:35.535: INFO: Service rm3 in namespace e2e-tests-kubectl-qd94j found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:15:37.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qd94j" for this suite.
Jan 30 12:16:01.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:16:01.803: INFO: namespace: e2e-tests-kubectl-qd94j, resource: bindings, ignored listing per whitelist
Jan 30 12:16:01.819: INFO: namespace e2e-tests-kubectl-qd94j deletion completed in 24.216517277s

• [SLOW TEST:40.283 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
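The `kubectl expose` sequence exercised in the test above can be reproduced by hand. A hedged sketch, assuming a reachable cluster in `$KUBECONFIG`, an illustrative namespace `demo-ns`, and an RC manifest `redis-master-rc.yaml` (names and ports mirror this run but are otherwise arbitrary):

```shell
# Create the replication controller the services will select over.
kubectl create -f redis-master-rc.yaml --namespace=demo-ns

# Expose the RC directly as a service; --port is the service port,
# --target-port is the container port (6379 for Redis).
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 \
  --namespace=demo-ns

# A service can itself be exposed again under a new name and service port;
# the selector and target port carry over.
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 \
  --namespace=demo-ns

# Both services should list the same endpoints.
kubectl get services rm2 rm3 --namespace=demo-ns -o wide
```

Exposing a service (rather than the RC) is what the second half of the test checks: the new service inherits the original's selector, so both front the same pods.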
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:16:01.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 30 12:16:22.358: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 12:16:22.396: INFO: Pod pod-with-prestop-http-hook still exists
Jan 30 12:16:24.397: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 12:16:24.579: INFO: Pod pod-with-prestop-http-hook still exists
Jan 30 12:16:26.397: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 12:16:26.469: INFO: Pod pod-with-prestop-http-hook still exists
Jan 30 12:16:28.397: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 12:16:28.501: INFO: Pod pod-with-prestop-http-hook still exists
Jan 30 12:16:30.397: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 12:16:30.409: INFO: Pod pod-with-prestop-http-hook still exists
Jan 30 12:16:32.397: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 12:16:32.518: INFO: Pod pod-with-prestop-http-hook still exists
Jan 30 12:16:34.397: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 12:16:34.417: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:16:34.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-wzkf2" for this suite.
Jan 30 12:16:58.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:16:58.684: INFO: namespace: e2e-tests-container-lifecycle-hook-wzkf2, resource: bindings, ignored listing per whitelist
Jan 30 12:16:58.744: INFO: namespace e2e-tests-container-lifecycle-hook-wzkf2 deletion completed in 24.285255438s

• [SLOW TEST:56.923 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
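The prestop test above creates a pod whose `preStop` hook issues an HTTP GET to a handler pod, then deletes it and polls until it is gone. A minimal sketch of such a pod; the image, handler address, and port are illustrative assumptions, not the test's actual values:

```shell
# Pod with a preStop httpGet lifecycle hook (handler host/port assumed).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop
          host: 10.32.0.4   # address of the hook-handler pod (assumed)
          port: 8080
EOF

# Deleting the pod fires the hook during the termination grace period,
# which is why the log above shows several "still exists" polls first.
kubectl delete pod pod-with-prestop-http-hook
```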
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:16:58.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 30 12:16:58.983: INFO: Waiting up to 5m0s for pod "pod-69d186f1-435a-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-r77wh" to be "success or failure"
Jan 30 12:16:59.054: INFO: Pod "pod-69d186f1-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 71.098192ms
Jan 30 12:17:01.126: INFO: Pod "pod-69d186f1-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142865295s
Jan 30 12:17:03.148: INFO: Pod "pod-69d186f1-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164260448s
Jan 30 12:17:05.177: INFO: Pod "pod-69d186f1-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193679222s
Jan 30 12:17:07.190: INFO: Pod "pod-69d186f1-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.206773473s
Jan 30 12:17:09.232: INFO: Pod "pod-69d186f1-435a-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.249005283s
STEP: Saw pod success
Jan 30 12:17:09.233: INFO: Pod "pod-69d186f1-435a-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:17:09.253: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-69d186f1-435a-11ea-a47a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 30 12:17:09.772: INFO: Waiting for pod pod-69d186f1-435a-11ea-a47a-0242ac110005 to disappear
Jan 30 12:17:10.057: INFO: Pod pod-69d186f1-435a-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:17:10.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-r77wh" for this suite.
Jan 30 12:17:16.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:17:16.463: INFO: namespace: e2e-tests-emptydir-r77wh, resource: bindings, ignored listing per whitelist
Jan 30 12:17:16.463: INFO: namespace e2e-tests-emptydir-r77wh deletion completed in 6.391915604s

• [SLOW TEST:17.719 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
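The `(non-root,0666,default)` case above means: a non-root user, file mode 0666, and the default emptyDir medium (node disk, not tmpfs). A hedged sketch of an equivalent probe pod; the image, user ID, and mount path are illustrative:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root, matching the test variant
  containers:
  - name: test-container
    image: busybox
    # Report the permissions on the mounted volume, then exit 0 so the
    # pod reaches Succeeded, as in the "success or failure" wait above.
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # default medium = node storage, not tmpfs
EOF

# The mode information lands in the container logs, which is why the test
# fetches logs from the node after seeing pod success.
kubectl logs emptydir-mode-test
```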
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:17:16.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:18:14.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-dt9mx" for this suite.
Jan 30 12:18:22.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:18:22.986: INFO: namespace: e2e-tests-container-runtime-dt9mx, resource: bindings, ignored listing per whitelist
Jan 30 12:18:23.051: INFO: namespace e2e-tests-container-runtime-dt9mx deletion completed in 8.35281005s

• [SLOW TEST:66.587 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
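The `terminate-cmd-rpa`/`-rpof`/`-rpn` suffixes in the steps above correspond to `restartPolicy: Always`, `OnFailure`, and `Never`; for each, the test checks the resulting `RestartCount`, `Phase`, `Ready` condition, and `State`. A hedged sketch of the `Never` case only:

```shell
# A container that exits non-zero under restartPolicy Never is not
# restarted: RestartCount stays 0 and the pod phase becomes Failed.
kubectl run terminate-rpn --image=busybox --restart=Never -- sh -c "exit 1"

# Inspect the resulting status fields the test asserts on.
kubectl get pod terminate-rpn \
  -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
```

Under `Always` the same command would drive `RestartCount` up (with backoff) while the phase stays `Running`; under `OnFailure` it restarts only until the command exits 0.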
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:18:23.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 30 12:18:23.237: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 30 12:18:23.253: INFO: Waiting for terminating namespaces to be deleted...
Jan 30 12:18:23.257: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 30 12:18:23.274: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 30 12:18:23.274: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 30 12:18:23.274: INFO: 	Container coredns ready: true, restart count 0
Jan 30 12:18:23.274: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 30 12:18:23.274: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 12:18:23.274: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 30 12:18:23.274: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 30 12:18:23.274: INFO: 	Container weave ready: true, restart count 0
Jan 30 12:18:23.274: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 12:18:23.274: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 30 12:18:23.274: INFO: 	Container coredns ready: true, restart count 0
Jan 30 12:18:23.274: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 30 12:18:23.274: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15eea9555add7eef], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:18:24.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-5kf8h" for this suite.
Jan 30 12:18:30.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:18:30.553: INFO: namespace: e2e-tests-sched-pred-5kf8h, resource: bindings, ignored listing per whitelist
Jan 30 12:18:30.698: INFO: namespace e2e-tests-sched-pred-5kf8h deletion completed in 6.355918452s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.647 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
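The scheduler-predicate test above submits a pod whose `nodeSelector` matches no node and then only asserts that a `FailedScheduling` event appears; the pod is expected to stay `Pending`. A sketch, assuming a label key/value that no node in the cluster carries:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonempty          # assumed label that no node carries
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

# The event stream should show the same message as the log above:
# "0/1 nodes are available: 1 node(s) didn't match node selector."
kubectl describe pod restricted-pod
```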
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:18:30.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 12:18:30.952: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0a296c1-435a-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-pwfl7" to be "success or failure"
Jan 30 12:18:30.986: INFO: Pod "downwardapi-volume-a0a296c1-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.364713ms
Jan 30 12:18:33.000: INFO: Pod "downwardapi-volume-a0a296c1-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047246959s
Jan 30 12:18:35.026: INFO: Pod "downwardapi-volume-a0a296c1-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073255004s
Jan 30 12:18:37.283: INFO: Pod "downwardapi-volume-a0a296c1-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.330331251s
Jan 30 12:18:39.309: INFO: Pod "downwardapi-volume-a0a296c1-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.356584609s
Jan 30 12:18:41.326: INFO: Pod "downwardapi-volume-a0a296c1-435a-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.37302324s
STEP: Saw pod success
Jan 30 12:18:41.326: INFO: Pod "downwardapi-volume-a0a296c1-435a-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:18:41.333: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a0a296c1-435a-11ea-a47a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 30 12:18:41.443: INFO: Waiting for pod downwardapi-volume-a0a296c1-435a-11ea-a47a-0242ac110005 to disappear
Jan 30 12:18:41.453: INFO: Pod downwardapi-volume-a0a296c1-435a-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:18:41.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pwfl7" for this suite.
Jan 30 12:18:47.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:18:47.681: INFO: namespace: e2e-tests-projected-pwfl7, resource: bindings, ignored listing per whitelist
Jan 30 12:18:47.791: INFO: namespace e2e-tests-projected-pwfl7 deletion completed in 6.326133048s

• [SLOW TEST:17.093 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:18:47.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 12:18:48.021: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aacdabe5-435a-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-64j9x" to be "success or failure"
Jan 30 12:18:48.031: INFO: Pod "downwardapi-volume-aacdabe5-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.20567ms
Jan 30 12:18:50.053: INFO: Pod "downwardapi-volume-aacdabe5-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031778783s
Jan 30 12:18:52.071: INFO: Pod "downwardapi-volume-aacdabe5-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050095685s
Jan 30 12:18:54.182: INFO: Pod "downwardapi-volume-aacdabe5-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160249198s
Jan 30 12:18:56.263: INFO: Pod "downwardapi-volume-aacdabe5-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241710292s
Jan 30 12:18:58.279: INFO: Pod "downwardapi-volume-aacdabe5-435a-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.257321963s
STEP: Saw pod success
Jan 30 12:18:58.279: INFO: Pod "downwardapi-volume-aacdabe5-435a-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:18:58.286: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-aacdabe5-435a-11ea-a47a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 30 12:18:59.228: INFO: Waiting for pod downwardapi-volume-aacdabe5-435a-11ea-a47a-0242ac110005 to disappear
Jan 30 12:18:59.495: INFO: Pod downwardapi-volume-aacdabe5-435a-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:18:59.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-64j9x" for this suite.
Jan 30 12:19:05.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:19:05.645: INFO: namespace: e2e-tests-downward-api-64j9x, resource: bindings, ignored listing per whitelist
Jan 30 12:19:05.767: INFO: namespace e2e-tests-downward-api-64j9x deletion completed in 6.254311091s

• [SLOW TEST:17.975 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:19:05.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b5841db7-435a-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 12:19:06.076: INFO: Waiting up to 5m0s for pod "pod-secrets-b5851a71-435a-11ea-a47a-0242ac110005" in namespace "e2e-tests-secrets-ccjkz" to be "success or failure"
Jan 30 12:19:06.100: INFO: Pod "pod-secrets-b5851a71-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.679226ms
Jan 30 12:19:08.123: INFO: Pod "pod-secrets-b5851a71-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045904344s
Jan 30 12:19:10.148: INFO: Pod "pod-secrets-b5851a71-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071746956s
Jan 30 12:19:12.483: INFO: Pod "pod-secrets-b5851a71-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.406541899s
Jan 30 12:19:14.512: INFO: Pod "pod-secrets-b5851a71-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.435555991s
Jan 30 12:19:16.538: INFO: Pod "pod-secrets-b5851a71-435a-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.461436359s
STEP: Saw pod success
Jan 30 12:19:16.539: INFO: Pod "pod-secrets-b5851a71-435a-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:19:16.556: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b5851a71-435a-11ea-a47a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 30 12:19:16.781: INFO: Waiting for pod pod-secrets-b5851a71-435a-11ea-a47a-0242ac110005 to disappear
Jan 30 12:19:16.790: INFO: Pod pod-secrets-b5851a71-435a-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:19:16.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ccjkz" for this suite.
Jan 30 12:19:22.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:19:22.951: INFO: namespace: e2e-tests-secrets-ccjkz, resource: bindings, ignored listing per whitelist
Jan 30 12:19:22.997: INFO: namespace e2e-tests-secrets-ccjkz deletion completed in 6.198828589s

• [SLOW TEST:17.230 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
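The secret `defaultMode` test above mounts a secret volume and verifies the projected files carry the requested permissions. A hedged sketch with an illustrative secret name, mode, and probe command:

```shell
# Create a secret and mount it with an explicit defaultMode.
kubectl create secret generic mode-test --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mode
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Print the mode of each projected file; with defaultMode 0400 the
    # files should list as -r--------.
    command: ["sh", "-c", "ls -lL /etc/secret-volume/"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: mode-test
      defaultMode: 0400
EOF
```

As with the other volume tests in this log, the assertion is made by reading the container's logs after the pod reaches `Succeeded`.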
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:19:22.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 30 12:19:30.695: INFO: 10 pods remaining
Jan 30 12:19:30.695: INFO: 10 pods has nil DeletionTimestamp
Jan 30 12:19:30.695: INFO: 
Jan 30 12:19:31.126: INFO: 9 pods remaining
Jan 30 12:19:31.126: INFO: 1 pods has nil DeletionTimestamp
Jan 30 12:19:31.126: INFO: 
Jan 30 12:19:31.964: INFO: 0 pods remaining
Jan 30 12:19:31.964: INFO: 0 pods has nil DeletionTimestamp
Jan 30 12:19:31.964: INFO: 
STEP: Gathering metrics
W0130 12:19:32.987477       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 12:19:32.987: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:19:32.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7qrcz" for this suite.
Jan 30 12:19:47.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:19:47.505: INFO: namespace: e2e-tests-gc-7qrcz, resource: bindings, ignored listing per whitelist
Jan 30 12:19:47.533: INFO: namespace e2e-tests-gc-7qrcz deletion completed in 14.541850735s

• [SLOW TEST:24.535 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
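[editor's note] The garbage-collector spec above exercises foreground cascading deletion: the RC is kept (with a non-nil deletionTimestamp) until every dependent pod is gone, which is what the "N pods remaining" countdown shows. A minimal sketch of the deleteOptions body that requests this behavior when deleting the RC through the API:

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Foreground"
}
```

With `Foreground`, the server blocks the owner's actual removal behind deletion of its dependents; with `Background` (the test's contrast case elsewhere in the suite), the owner disappears immediately and pods are cleaned up afterwards.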
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:19:47.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 30 12:19:48.035: INFO: Waiting up to 5m0s for pod "pod-ce945686-435a-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-h6cn5" to be "success or failure"
Jan 30 12:19:48.045: INFO: Pod "pod-ce945686-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.953721ms
Jan 30 12:19:50.314: INFO: Pod "pod-ce945686-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278411687s
Jan 30 12:19:52.335: INFO: Pod "pod-ce945686-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.300123221s
Jan 30 12:19:54.350: INFO: Pod "pod-ce945686-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.314974775s
Jan 30 12:19:56.418: INFO: Pod "pod-ce945686-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.383093598s
Jan 30 12:19:58.576: INFO: Pod "pod-ce945686-435a-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.540919404s
STEP: Saw pod success
Jan 30 12:19:58.577: INFO: Pod "pod-ce945686-435a-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:19:58.597: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ce945686-435a-11ea-a47a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 30 12:19:58.760: INFO: Waiting for pod pod-ce945686-435a-11ea-a47a-0242ac110005 to disappear
Jan 30 12:19:58.866: INFO: Pod pod-ce945686-435a-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:19:58.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-h6cn5" for this suite.
Jan 30 12:20:04.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:20:05.072: INFO: namespace: e2e-tests-emptydir-h6cn5, resource: bindings, ignored listing per whitelist
Jan 30 12:20:05.094: INFO: namespace e2e-tests-emptydir-h6cn5 deletion completed in 6.22049942s

• [SLOW TEST:17.561 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
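[editor's note] The emptyDir spec above creates a single throwaway pod that writes a file into an emptyDir volume and reports its mode, then exits so the framework can match "success or failure" on the pod phase. A rough, illustrative manifest (names follow the log; the image and command are stand-ins, not the suite's actual mounttest invocation):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644          # illustrative; the suite generates a UID-based name
spec:
  restartPolicy: Never             # pod must terminate so its phase can reach Succeeded
  containers:
  - name: test-container           # container name as seen in the log above
    image: busybox                 # stand-in; the e2e suite uses its own mounttest image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # "default medium" = node-local disk; medium: Memory gives the tmpfs variant
```

The `(non-root,0666,tmpfs)` spec that follows is the same pattern with `medium: Memory` and a non-root `securityContext`.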
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:20:05.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 30 12:20:05.402: INFO: Waiting up to 5m0s for pod "pod-d8deab25-435a-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-7gjcn" to be "success or failure"
Jan 30 12:20:05.451: INFO: Pod "pod-d8deab25-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.76903ms
Jan 30 12:20:07.466: INFO: Pod "pod-d8deab25-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062770886s
Jan 30 12:20:09.490: INFO: Pod "pod-d8deab25-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0871791s
Jan 30 12:20:11.813: INFO: Pod "pod-d8deab25-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410124105s
Jan 30 12:20:13.835: INFO: Pod "pod-d8deab25-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.432400521s
Jan 30 12:20:15.865: INFO: Pod "pod-d8deab25-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.46221907s
Jan 30 12:20:17.899: INFO: Pod "pod-d8deab25-435a-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.496068634s
STEP: Saw pod success
Jan 30 12:20:17.899: INFO: Pod "pod-d8deab25-435a-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:20:17.905: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d8deab25-435a-11ea-a47a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 30 12:20:18.719: INFO: Waiting for pod pod-d8deab25-435a-11ea-a47a-0242ac110005 to disappear
Jan 30 12:20:18.734: INFO: Pod pod-d8deab25-435a-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:20:18.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7gjcn" for this suite.
Jan 30 12:20:24.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:20:25.231: INFO: namespace: e2e-tests-emptydir-7gjcn, resource: bindings, ignored listing per whitelist
Jan 30 12:20:25.287: INFO: namespace e2e-tests-emptydir-7gjcn deletion completed in 6.527669144s

• [SLOW TEST:20.193 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:20:25.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 30 12:20:25.465: INFO: Waiting up to 5m0s for pod "downward-api-e4e4e9e7-435a-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-5lg6s" to be "success or failure"
Jan 30 12:20:25.493: INFO: Pod "downward-api-e4e4e9e7-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.214877ms
Jan 30 12:20:27.850: INFO: Pod "downward-api-e4e4e9e7-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38482302s
Jan 30 12:20:29.869: INFO: Pod "downward-api-e4e4e9e7-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403683598s
Jan 30 12:20:31.921: INFO: Pod "downward-api-e4e4e9e7-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.45556393s
Jan 30 12:20:33.966: INFO: Pod "downward-api-e4e4e9e7-435a-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.500928737s
Jan 30 12:20:35.985: INFO: Pod "downward-api-e4e4e9e7-435a-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.519998762s
STEP: Saw pod success
Jan 30 12:20:35.986: INFO: Pod "downward-api-e4e4e9e7-435a-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:20:35.994: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-e4e4e9e7-435a-11ea-a47a-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 30 12:20:36.590: INFO: Waiting for pod downward-api-e4e4e9e7-435a-11ea-a47a-0242ac110005 to disappear
Jan 30 12:20:36.797: INFO: Pod downward-api-e4e4e9e7-435a-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:20:36.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5lg6s" for this suite.
Jan 30 12:20:44.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:20:45.021: INFO: namespace: e2e-tests-downward-api-5lg6s, resource: bindings, ignored listing per whitelist
Jan 30 12:20:45.092: INFO: namespace e2e-tests-downward-api-5lg6s deletion completed in 8.281980044s

• [SLOW TEST:19.804 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
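[editor's note] The Downward API spec above verifies that pod metadata can be injected as environment variables via `fieldRef`. The env stanza the test exercises looks like this (pod/container names are illustrative; the `fieldPath` values are the standard ones for name, namespace, and IP):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-test          # illustrative; the suite generates a UID-based name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container           # container name as seen in the log above
    image: busybox                 # stand-in image
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```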
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:20:45.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-v7mzv
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 30 12:20:45.327: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 30 12:21:15.602: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-v7mzv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 12:21:15.603: INFO: >>> kubeConfig: /root/.kube/config
I0130 12:21:15.781059       8 log.go:172] (0xc000570580) (0xc001524320) Create stream
I0130 12:21:15.781392       8 log.go:172] (0xc000570580) (0xc001524320) Stream added, broadcasting: 1
I0130 12:21:15.791000       8 log.go:172] (0xc000570580) Reply frame received for 1
I0130 12:21:15.791235       8 log.go:172] (0xc000570580) (0xc001af10e0) Create stream
I0130 12:21:15.791257       8 log.go:172] (0xc000570580) (0xc001af10e0) Stream added, broadcasting: 3
I0130 12:21:15.794457       8 log.go:172] (0xc000570580) Reply frame received for 3
I0130 12:21:15.794516       8 log.go:172] (0xc000570580) (0xc0015243c0) Create stream
I0130 12:21:15.794531       8 log.go:172] (0xc000570580) (0xc0015243c0) Stream added, broadcasting: 5
I0130 12:21:15.795986       8 log.go:172] (0xc000570580) Reply frame received for 5
I0130 12:21:16.014131       8 log.go:172] (0xc000570580) Data frame received for 3
I0130 12:21:16.014313       8 log.go:172] (0xc001af10e0) (3) Data frame handling
I0130 12:21:16.014360       8 log.go:172] (0xc001af10e0) (3) Data frame sent
I0130 12:21:16.152755       8 log.go:172] (0xc000570580) Data frame received for 1
I0130 12:21:16.153030       8 log.go:172] (0xc000570580) (0xc001af10e0) Stream removed, broadcasting: 3
I0130 12:21:16.153151       8 log.go:172] (0xc001524320) (1) Data frame handling
I0130 12:21:16.153387       8 log.go:172] (0xc000570580) (0xc0015243c0) Stream removed, broadcasting: 5
I0130 12:21:16.153613       8 log.go:172] (0xc001524320) (1) Data frame sent
I0130 12:21:16.153703       8 log.go:172] (0xc000570580) (0xc001524320) Stream removed, broadcasting: 1
I0130 12:21:16.153742       8 log.go:172] (0xc000570580) Go away received
I0130 12:21:16.154379       8 log.go:172] (0xc000570580) (0xc001524320) Stream removed, broadcasting: 1
I0130 12:21:16.154425       8 log.go:172] (0xc000570580) (0xc001af10e0) Stream removed, broadcasting: 3
I0130 12:21:16.154449       8 log.go:172] (0xc000570580) (0xc0015243c0) Stream removed, broadcasting: 5
Jan 30 12:21:16.154: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:21:16.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-v7mzv" for this suite.
Jan 30 12:21:40.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:21:40.377: INFO: namespace: e2e-tests-pod-network-test-v7mzv, resource: bindings, ignored listing per whitelist
Jan 30 12:21:40.398: INFO: namespace e2e-tests-pod-network-test-v7mzv deletion completed in 24.218752774s

• [SLOW TEST:55.305 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:21:40.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-11c3f8ff-435b-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 12:21:40.764: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-11c5a6db-435b-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-7c78c" to be "success or failure"
Jan 30 12:21:40.814: INFO: Pod "pod-projected-configmaps-11c5a6db-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.047971ms
Jan 30 12:21:42.893: INFO: Pod "pod-projected-configmaps-11c5a6db-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128618996s
Jan 30 12:21:44.914: INFO: Pod "pod-projected-configmaps-11c5a6db-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149757223s
Jan 30 12:21:46.966: INFO: Pod "pod-projected-configmaps-11c5a6db-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202396852s
Jan 30 12:21:48.989: INFO: Pod "pod-projected-configmaps-11c5a6db-435b-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.225140322s
STEP: Saw pod success
Jan 30 12:21:48.989: INFO: Pod "pod-projected-configmaps-11c5a6db-435b-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:21:48.996: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-11c5a6db-435b-11ea-a47a-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 30 12:21:49.210: INFO: Waiting for pod pod-projected-configmaps-11c5a6db-435b-11ea-a47a-0242ac110005 to disappear
Jan 30 12:21:49.244: INFO: Pod pod-projected-configmaps-11c5a6db-435b-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:21:49.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7c78c" for this suite.
Jan 30 12:21:55.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:21:55.565: INFO: namespace: e2e-tests-projected-7c78c, resource: bindings, ignored listing per whitelist
Jan 30 12:21:55.567: INFO: namespace e2e-tests-projected-7c78c deletion completed in 6.308076684s

• [SLOW TEST:15.169 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
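[editor's note] "Consumable in multiple volumes in the same pod" means the same ConfigMap is projected into two separate volumes, each with its own mount point, and the test container reads the key from both paths. A hedged sketch (ConfigMap and pod names are illustrative; the log shows UID-suffixed names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test   # container name as seen in the log above
    image: busybox                          # stand-in image
    command: ["sh", "-c", "cat /etc/projected-volume-1/data-1 /etc/projected-volume-2/data-1"]
    volumeMounts:
    - name: projected-volume-1
      mountPath: /etc/projected-volume-1
    - name: projected-volume-2
      mountPath: /etc/projected-volume-2
  volumes:
  - name: projected-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test    # illustrative; same ConfigMap backs both volumes
  - name: projected-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test
```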
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:21:55.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 30 12:21:55.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:21:56.211: INFO: stderr: ""
Jan 30 12:21:56.211: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 12:21:56.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:21:56.414: INFO: stderr: ""
Jan 30 12:21:56.414: INFO: stdout: "update-demo-nautilus-6s4g5 update-demo-nautilus-8w27j "
Jan 30 12:21:56.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6s4g5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:21:56.553: INFO: stderr: ""
Jan 30 12:21:56.554: INFO: stdout: ""
Jan 30 12:21:56.554: INFO: update-demo-nautilus-6s4g5 is created but not running
Jan 30 12:22:01.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:02.261: INFO: stderr: ""
Jan 30 12:22:02.261: INFO: stdout: "update-demo-nautilus-6s4g5 update-demo-nautilus-8w27j "
Jan 30 12:22:02.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6s4g5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:02.470: INFO: stderr: ""
Jan 30 12:22:02.470: INFO: stdout: ""
Jan 30 12:22:02.471: INFO: update-demo-nautilus-6s4g5 is created but not running
Jan 30 12:22:07.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:07.721: INFO: stderr: ""
Jan 30 12:22:07.721: INFO: stdout: "update-demo-nautilus-6s4g5 update-demo-nautilus-8w27j "
Jan 30 12:22:07.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6s4g5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:07.868: INFO: stderr: ""
Jan 30 12:22:07.869: INFO: stdout: "true"
Jan 30 12:22:07.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6s4g5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:08.020: INFO: stderr: ""
Jan 30 12:22:08.021: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 12:22:08.021: INFO: validating pod update-demo-nautilus-6s4g5
Jan 30 12:22:08.066: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 12:22:08.067: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 12:22:08.067: INFO: update-demo-nautilus-6s4g5 is verified up and running
Jan 30 12:22:08.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8w27j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:08.210: INFO: stderr: ""
Jan 30 12:22:08.211: INFO: stdout: "true"
Jan 30 12:22:08.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8w27j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:08.362: INFO: stderr: ""
Jan 30 12:22:08.362: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 12:22:08.362: INFO: validating pod update-demo-nautilus-8w27j
Jan 30 12:22:08.374: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 12:22:08.374: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 12:22:08.374: INFO: update-demo-nautilus-8w27j is verified up and running
STEP: scaling down the replication controller
Jan 30 12:22:08.378: INFO: scanned /root for discovery docs: 
Jan 30 12:22:08.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:09.764: INFO: stderr: ""
Jan 30 12:22:09.764: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 12:22:09.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:09.996: INFO: stderr: ""
Jan 30 12:22:09.997: INFO: stdout: "update-demo-nautilus-6s4g5 update-demo-nautilus-8w27j "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 30 12:22:14.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:15.184: INFO: stderr: ""
Jan 30 12:22:15.184: INFO: stdout: "update-demo-nautilus-8w27j "
Jan 30 12:22:15.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8w27j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:15.291: INFO: stderr: ""
Jan 30 12:22:15.291: INFO: stdout: "true"
Jan 30 12:22:15.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8w27j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:15.486: INFO: stderr: ""
Jan 30 12:22:15.486: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 12:22:15.487: INFO: validating pod update-demo-nautilus-8w27j
Jan 30 12:22:15.501: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 12:22:15.501: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 12:22:15.501: INFO: update-demo-nautilus-8w27j is verified up and running
STEP: scaling up the replication controller
Jan 30 12:22:15.503: INFO: scanned /root for discovery docs: 
Jan 30 12:22:15.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:16.845: INFO: stderr: ""
Jan 30 12:22:16.845: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 12:22:16.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:17.035: INFO: stderr: ""
Jan 30 12:22:17.035: INFO: stdout: "update-demo-nautilus-8w27j update-demo-nautilus-z5znm "
Jan 30 12:22:17.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8w27j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:17.185: INFO: stderr: ""
Jan 30 12:22:17.185: INFO: stdout: "true"
Jan 30 12:22:17.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8w27j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:17.341: INFO: stderr: ""
Jan 30 12:22:17.341: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 12:22:17.341: INFO: validating pod update-demo-nautilus-8w27j
Jan 30 12:22:17.355: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 12:22:17.355: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 12:22:17.355: INFO: update-demo-nautilus-8w27j is verified up and running
Jan 30 12:22:17.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z5znm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:17.561: INFO: stderr: ""
Jan 30 12:22:17.562: INFO: stdout: ""
Jan 30 12:22:17.562: INFO: update-demo-nautilus-z5znm is created but not running
Jan 30 12:22:22.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:22.838: INFO: stderr: ""
Jan 30 12:22:22.838: INFO: stdout: "update-demo-nautilus-8w27j update-demo-nautilus-z5znm "
Jan 30 12:22:22.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8w27j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:22.982: INFO: stderr: ""
Jan 30 12:22:22.982: INFO: stdout: "true"
Jan 30 12:22:22.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8w27j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:23.144: INFO: stderr: ""
Jan 30 12:22:23.144: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 12:22:23.144: INFO: validating pod update-demo-nautilus-8w27j
Jan 30 12:22:23.165: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 12:22:23.165: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 30 12:22:23.166: INFO: update-demo-nautilus-8w27j is verified up and running
Jan 30 12:22:23.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z5znm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:23.282: INFO: stderr: ""
Jan 30 12:22:23.283: INFO: stdout: ""
Jan 30 12:22:23.283: INFO: update-demo-nautilus-z5znm is created but not running
Jan 30 12:22:28.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:28.509: INFO: stderr: ""
Jan 30 12:22:28.509: INFO: stdout: "update-demo-nautilus-8w27j update-demo-nautilus-z5znm "
Jan 30 12:22:28.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8w27j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:28.651: INFO: stderr: ""
Jan 30 12:22:28.651: INFO: stdout: "true"
Jan 30 12:22:28.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8w27j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:28.773: INFO: stderr: ""
Jan 30 12:22:28.773: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 12:22:28.774: INFO: validating pod update-demo-nautilus-8w27j
Jan 30 12:22:28.781: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 12:22:28.781: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 30 12:22:28.781: INFO: update-demo-nautilus-8w27j is verified up and running
Jan 30 12:22:28.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z5znm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:28.929: INFO: stderr: ""
Jan 30 12:22:28.930: INFO: stdout: "true"
Jan 30 12:22:28.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z5znm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:29.054: INFO: stderr: ""
Jan 30 12:22:29.054: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 12:22:29.054: INFO: validating pod update-demo-nautilus-z5znm
Jan 30 12:22:29.074: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 12:22:29.075: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 30 12:22:29.075: INFO: update-demo-nautilus-z5znm is verified up and running
STEP: using delete to clean up resources
Jan 30 12:22:29.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:29.185: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 12:22:29.186: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 30 12:22:29.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-mlzn7'
Jan 30 12:22:29.483: INFO: stderr: "No resources found.\n"
Jan 30 12:22:29.483: INFO: stdout: ""
Jan 30 12:22:29.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-mlzn7 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 30 12:22:29.643: INFO: stderr: ""
Jan 30 12:22:29.644: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:22:29.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mlzn7" for this suite.
Jan 30 12:22:53.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:22:53.967: INFO: namespace: e2e-tests-kubectl-mlzn7, resource: bindings, ignored listing per whitelist
Jan 30 12:22:54.058: INFO: namespace e2e-tests-kubectl-mlzn7 deletion completed in 24.39484022s

• [SLOW TEST:58.491 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
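The Update Demo checks above poll each replica with a kubectl go-template every five seconds until stdout is "true" (pod `update-demo-nautilus-z5znm` reports "" twice before coming up). A minimal, self-contained sketch of that retry pattern, with a hypothetical `check` callable standing in for the kubectl invocation (not the e2e framework's actual code):

```python
import time

def wait_until(check, timeout=30.0, interval=5.0, clock=time.monotonic, sleep=time.sleep):
    """Poll `check` every `interval` seconds until it returns a truthy
    value or `timeout` elapses. Returns the truthy value, else None."""
    deadline = clock() + timeout
    while clock() < deadline:
        result = check()
        if result:
            return result
        sleep(interval)
    return None

# Simulated pod reporting "" (not running) twice, then "true",
# mirroring the stdout values seen in the log above.
outputs = iter(["", "", "true"])
status = wait_until(lambda: next(outputs), timeout=60, interval=0, sleep=lambda s: None)
```

In the real test the check shells out to `kubectl get pods ... -o template`; here the injected `clock` and `sleep` just keep the sketch runnable without a cluster.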
SS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:22:54.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-kc97x
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-kc97x to expose endpoints map[]
Jan 30 12:22:54.697: INFO: Get endpoints failed (45.624954ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 30 12:22:55.787: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-kc97x exposes endpoints map[] (1.135999957s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-kc97x
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-kc97x to expose endpoints map[pod1:[80]]
Jan 30 12:23:02.456: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (6.646270756s elapsed, will retry)
Jan 30 12:23:05.689: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-kc97x exposes endpoints map[pod1:[80]] (9.879167114s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-kc97x
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-kc97x to expose endpoints map[pod1:[80] pod2:[80]]
Jan 30 12:23:10.471: INFO: Unexpected endpoints: found map[3e822e19-435b-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.77222012s elapsed, will retry)
Jan 30 12:23:14.702: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-kc97x exposes endpoints map[pod1:[80] pod2:[80]] (9.002705322s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-kc97x
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-kc97x to expose endpoints map[pod2:[80]]
Jan 30 12:23:14.834: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-kc97x exposes endpoints map[pod2:[80]] (107.871383ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-kc97x
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-kc97x to expose endpoints map[]
Jan 30 12:23:15.952: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-kc97x exposes endpoints map[] (1.10027224s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:23:16.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-kc97x" for this suite.
Jan 30 12:23:40.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:23:40.275: INFO: namespace: e2e-tests-services-kc97x, resource: bindings, ignored listing per whitelist
Jan 30 12:23:40.299: INFO: namespace e2e-tests-services-kc97x deletion completed in 24.205424391s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:46.240 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
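The endpoint validation above repeatedly compares the service's observed endpoints against an expected map such as `map[pod1:[80] pod2:[80]]`, retrying on mismatch. A rough sketch of that comparison using plain dicts (not the e2e framework's types):

```python
def endpoints_match(observed, expected):
    """True when the observed endpoints expose exactly the expected
    pod -> ports mapping, ignoring port order."""
    if set(observed) != set(expected):
        return False
    return all(sorted(observed[pod]) == sorted(expected[pod]) for pod in expected)

# Mirrors the log: found map[] while expecting map[pod1:[80]] fails
# and triggers a retry; once both pods' endpoints appear, it passes.
early = endpoints_match({}, {"pod1": [80]})
final = endpoints_match({"pod2": [80], "pod1": [80]}, {"pod1": [80], "pod2": [80]})
```

The framework also tolerates transient entries keyed by pod UID (as in the `3e822e19-...:[80]` line above) by simply retrying until the names settle.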
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:23:40.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-592b86c1-435b-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 12:23:40.584: INFO: Waiting up to 5m0s for pod "pod-secrets-592e7601-435b-11ea-a47a-0242ac110005" in namespace "e2e-tests-secrets-bfjdz" to be "success or failure"
Jan 30 12:23:40.738: INFO: Pod "pod-secrets-592e7601-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 154.622357ms
Jan 30 12:23:42.761: INFO: Pod "pod-secrets-592e7601-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177337934s
Jan 30 12:23:44.775: INFO: Pod "pod-secrets-592e7601-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191295258s
Jan 30 12:23:47.168: INFO: Pod "pod-secrets-592e7601-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584591996s
Jan 30 12:23:49.199: INFO: Pod "pod-secrets-592e7601-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.615213812s
Jan 30 12:23:51.217: INFO: Pod "pod-secrets-592e7601-435b-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.633587155s
STEP: Saw pod success
Jan 30 12:23:51.218: INFO: Pod "pod-secrets-592e7601-435b-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:23:51.224: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-592e7601-435b-11ea-a47a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 30 12:23:51.354: INFO: Waiting for pod pod-secrets-592e7601-435b-11ea-a47a-0242ac110005 to disappear
Jan 30 12:23:51.485: INFO: Pod pod-secrets-592e7601-435b-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:23:51.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bfjdz" for this suite.
Jan 30 12:23:59.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:23:59.627: INFO: namespace: e2e-tests-secrets-bfjdz, resource: bindings, ignored listing per whitelist
Jan 30 12:23:59.694: INFO: namespace e2e-tests-secrets-bfjdz deletion completed in 8.175264449s

• [SLOW TEST:19.394 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
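The secret-volume test's name references `defaultMode` and `fsGroup`; in the API, `defaultMode` is stored as a decimal integer but is conventionally read in octal. A quick illustrative conversion (an assumption-level sketch, not framework code):

```python
def mode_to_octal(default_mode):
    """Render a Secret volume defaultMode (a decimal int in the API)
    as the familiar four-digit octal permission string."""
    return format(default_mode, "04o")

# 256 decimal is 0400 octal: owner read-only, the kind of restrictive
# mode a non-root pod with fsGroup set would typically consume.
ro_owner = mode_to_octal(256)
rw_world = mode_to_octal(0o644)
```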
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:23:59.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 30 12:23:59.849: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 30 12:23:59.907: INFO: Waiting for terminating namespaces to be deleted...
Jan 30 12:23:59.911: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 30 12:23:59.930: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 30 12:23:59.930: INFO: 	Container weave ready: true, restart count 0
Jan 30 12:23:59.930: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 12:23:59.930: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Jan 30 12:23:59.930: INFO: 	Container coredns ready: true, restart count 0
Jan 30 12:23:59.930: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 30 12:23:59.930: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 30 12:23:59.930: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 30 12:23:59.930: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Jan 30 12:23:59.930: INFO: 	Container coredns ready: true, restart count 0
Jan 30 12:23:59.930: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Jan 30 12:23:59.930: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 12:23:59.930: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-6ac21692-435b-11ea-a47a-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-6ac21692-435b-11ea-a47a-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-6ac21692-435b-11ea-a47a-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:24:22.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-26wdg" for this suite.
Jan 30 12:24:46.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:24:46.801: INFO: namespace: e2e-tests-sched-pred-26wdg, resource: bindings, ignored listing per whitelist
Jan 30 12:24:46.864: INFO: namespace e2e-tests-sched-pred-26wdg deletion completed in 24.366323942s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:47.170 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
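The scheduling test above applies a random label to a node, then relaunches the pod with a matching `nodeSelector`. The predicate itself is a subset check over the node's labels, roughly:

```python
def node_selector_matches(node_labels, node_selector):
    """A pod's nodeSelector matches a node when every key/value pair
    in the selector is present on the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Hypothetical labels mirroring the test's randomly generated e2e label.
labels = {
    "kubernetes.io/hostname": "hunter-server-hu5at5svl7ps",
    "kubernetes.io/e2e-example": "42",
}
hit = node_selector_matches(labels, {"kubernetes.io/e2e-example": "42"})
miss = node_selector_matches(labels, {"kubernetes.io/e2e-example": "41"})
```

Removing the label afterwards (as the log's cleanup STEP does) makes the same selector unsatisfiable again.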
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:24:46.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 30 12:24:47.178: INFO: Waiting up to 5m0s for pod "pod-80e079c6-435b-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-g4gwm" to be "success or failure"
Jan 30 12:24:47.191: INFO: Pod "pod-80e079c6-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.339042ms
Jan 30 12:24:49.207: INFO: Pod "pod-80e079c6-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0284985s
Jan 30 12:24:51.224: INFO: Pod "pod-80e079c6-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045375172s
Jan 30 12:24:53.239: INFO: Pod "pod-80e079c6-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06060186s
Jan 30 12:24:55.273: INFO: Pod "pod-80e079c6-435b-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09462475s
Jan 30 12:24:57.377: INFO: Pod "pod-80e079c6-435b-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.198730005s
STEP: Saw pod success
Jan 30 12:24:57.377: INFO: Pod "pod-80e079c6-435b-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:24:57.386: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-80e079c6-435b-11ea-a47a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 30 12:24:57.536: INFO: Waiting for pod pod-80e079c6-435b-11ea-a47a-0242ac110005 to disappear
Jan 30 12:24:57.556: INFO: Pod pod-80e079c6-435b-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:24:57.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-g4gwm" for this suite.
Jan 30 12:25:03.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:25:03.799: INFO: namespace: e2e-tests-emptydir-g4gwm, resource: bindings, ignored listing per whitelist
Jan 30 12:25:03.937: INFO: namespace e2e-tests-emptydir-g4gwm deletion completed in 6.366869648s

• [SLOW TEST:17.072 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:25:03.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 30 12:25:04.327: INFO: Number of nodes with available pods: 0
Jan 30 12:25:04.327: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:05.698: INFO: Number of nodes with available pods: 0
Jan 30 12:25:05.698: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:06.592: INFO: Number of nodes with available pods: 0
Jan 30 12:25:06.592: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:07.363: INFO: Number of nodes with available pods: 0
Jan 30 12:25:07.363: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:08.508: INFO: Number of nodes with available pods: 0
Jan 30 12:25:08.508: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:09.986: INFO: Number of nodes with available pods: 0
Jan 30 12:25:09.987: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:10.352: INFO: Number of nodes with available pods: 0
Jan 30 12:25:10.352: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:11.363: INFO: Number of nodes with available pods: 0
Jan 30 12:25:11.363: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:12.346: INFO: Number of nodes with available pods: 0
Jan 30 12:25:12.346: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:13.463: INFO: Number of nodes with available pods: 1
Jan 30 12:25:13.463: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 30 12:25:13.694: INFO: Number of nodes with available pods: 0
Jan 30 12:25:13.694: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:14.743: INFO: Number of nodes with available pods: 0
Jan 30 12:25:14.743: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:15.990: INFO: Number of nodes with available pods: 0
Jan 30 12:25:15.990: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:16.841: INFO: Number of nodes with available pods: 0
Jan 30 12:25:16.841: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:18.166: INFO: Number of nodes with available pods: 0
Jan 30 12:25:18.167: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:18.721: INFO: Number of nodes with available pods: 0
Jan 30 12:25:18.721: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:19.731: INFO: Number of nodes with available pods: 0
Jan 30 12:25:19.731: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:21.489: INFO: Number of nodes with available pods: 0
Jan 30 12:25:21.489: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:22.022: INFO: Number of nodes with available pods: 0
Jan 30 12:25:22.022: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:22.737: INFO: Number of nodes with available pods: 0
Jan 30 12:25:22.737: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:23.739: INFO: Number of nodes with available pods: 0
Jan 30 12:25:23.739: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:24.748: INFO: Number of nodes with available pods: 0
Jan 30 12:25:24.748: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 12:25:25.718: INFO: Number of nodes with available pods: 1
Jan 30 12:25:25.718: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-zmxhh, will wait for the garbage collector to delete the pods
Jan 30 12:25:25.879: INFO: Deleting DaemonSet.extensions daemon-set took: 94.76667ms
Jan 30 12:25:26.081: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.424187ms
Jan 30 12:25:42.620: INFO: Number of nodes with available pods: 0
Jan 30 12:25:42.620: INFO: Number of running nodes: 0, number of available pods: 0
Jan 30 12:25:42.634: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zmxhh/daemonsets","resourceVersion":"19971125"},"items":null}

Jan 30 12:25:42.642: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zmxhh/pods","resourceVersion":"19971125"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:25:42.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-zmxhh" for this suite.
Jan 30 12:25:48.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:25:49.025: INFO: namespace: e2e-tests-daemonsets-zmxhh, resource: bindings, ignored listing per whitelist
Jan 30 12:25:49.089: INFO: namespace e2e-tests-daemonsets-zmxhh deletion completed in 6.384470649s

• [SLOW TEST:45.151 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
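The DaemonSet check above repeatedly tallies, per node, whether a running and available daemon pod exists, and only passes once the count matches the node count. A simplified version of that tally (plain dicts, not the framework's pod objects):

```python
def nodes_with_available_pods(pods):
    """Count distinct nodes hosting at least one Running, available daemon pod."""
    return len({p["node"] for p in pods if p["phase"] == "Running" and p["available"]})

# After the test marks a pod Failed, the node drops out of the count
# until the DaemonSet controller revives the pod.
pods = [
    {"node": "hunter-server-hu5at5svl7ps", "phase": "Failed", "available": False},
]
before_revival = nodes_with_available_pods(pods)
pods.append({"node": "hunter-server-hu5at5svl7ps", "phase": "Running", "available": True})
after_revival = nodes_with_available_pods(pods)
```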
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:25:49.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0130 12:26:29.873830       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 12:26:29.874: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:26:29.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-v67t6" for this suite.
Jan 30 12:26:40.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:26:41.117: INFO: namespace: e2e-tests-gc-v67t6, resource: bindings, ignored listing per whitelist
Jan 30 12:26:41.312: INFO: namespace e2e-tests-gc-v67t6 deletion completed in 11.431880834s

• [SLOW TEST:52.221 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
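The garbage-collector test above deletes a replication controller with an orphan-propagation delete option, waits 30 seconds, and verifies the collector left the pods alone. The final check amounts to comparing the pod set before deletion with the set after the wait; a stdlib-only sketch of that comparison (the helper name and plain string slices are illustrative — the real test drives client-go against the API server):

```go
package main

import "fmt"

// orphanedSurvivors partitions the pre-delete pod snapshot into pods
// still present after the wait and pods that disappeared. Under the
// orphan delete option, the expected result is the full original set.
// (Hypothetical helper; the e2e framework queries live pod lists.)
func orphanedSurvivors(before, after []string) (survived, lost []string) {
	present := make(map[string]bool, len(after))
	for _, name := range after {
		present[name] = true
	}
	for _, name := range before {
		if present[name] {
			survived = append(survived, name)
		} else {
			lost = append(lost, name)
		}
	}
	return survived, lost
}

func main() {
	before := []string{"pod-a", "pod-b"}
	after := []string{"pod-a", "pod-b"} // orphan policy: nothing reaped
	survived, lost := orphanedSurvivors(before, after)
	fmt.Println(len(survived), len(lost)) // 2 0
}
```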
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:26:41.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-c56a75d8-435b-11ea-a47a-0242ac110005
STEP: Creating secret with name s-test-opt-upd-c56a79bb-435b-11ea-a47a-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c56a75d8-435b-11ea-a47a-0242ac110005
STEP: Updating secret s-test-opt-upd-c56a79bb-435b-11ea-a47a-0242ac110005
STEP: Creating secret with name s-test-opt-create-c56a7ada-435b-11ea-a47a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:27:09.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9vd2c" for this suite.
Jan 30 12:27:33.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:27:33.483: INFO: namespace: e2e-tests-projected-9vd2c, resource: bindings, ignored listing per whitelist
Jan 30 12:27:33.568: INFO: namespace e2e-tests-projected-9vd2c deletion completed in 24.198255002s

• [SLOW TEST:52.254 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:27:33.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 30 12:27:52.252: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 12:27:52.274: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 12:27:54.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 12:27:54.286: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 12:27:56.275: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 12:27:56.294: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 12:27:58.275: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 12:27:58.293: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 12:28:00.275: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 12:28:00.294: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:28:00.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-wmtb8" for this suite.
Jan 30 12:28:40.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:28:40.631: INFO: namespace: e2e-tests-container-lifecycle-hook-wmtb8, resource: bindings, ignored listing per whitelist
Jan 30 12:28:40.701: INFO: namespace e2e-tests-container-lifecycle-hook-wmtb8 deletion completed in 40.394610107s

• [SLOW TEST:67.133 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:28:40.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 30 12:28:49.625: INFO: Successfully updated pod "labelsupdate0c3f16e2-435c-11ea-a47a-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:28:51.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nmv9h" for this suite.
Jan 30 12:29:17.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:29:18.028: INFO: namespace: e2e-tests-projected-nmv9h, resource: bindings, ignored listing per whitelist
Jan 30 12:29:18.064: INFO: namespace e2e-tests-projected-nmv9h deletion completed in 26.262465576s

• [SLOW TEST:37.363 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:29:18.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 30 12:29:18.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 30 12:29:19.916: INFO: stderr: ""
Jan 30 12:29:19.917: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:29:19.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rxglz" for this suite.
Jan 30 12:29:25.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:29:26.162: INFO: namespace: e2e-tests-kubectl-rxglz, resource: bindings, ignored listing per whitelist
Jan 30 12:29:26.185: INFO: namespace e2e-tests-kubectl-rxglz deletion completed in 6.257361744s

• [SLOW TEST:8.121 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
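The `cluster-info` stdout captured above is wrapped in ANSI color escapes (`\x1b[0;32m` … `\x1b[0m`), so matching it against plain text like "Kubernetes master is running" first requires stripping those sequences. A stdlib sketch of that step (`stripANSI` is an assumed helper name, not part of the e2e framework):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ansi matches the SGR color escape sequences kubectl cluster-info
// emits (e.g. "\x1b[0;32m" and the reset "\x1b[0m").
var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

// stripANSI removes color codes so substring checks see plain text.
func stripANSI(s string) string { return ansi.ReplaceAllString(s, "") }

func main() {
	// Abbreviated from the stdout logged by the test above.
	stdout := "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n"
	plain := stripANSI(stdout)
	fmt.Println(strings.Contains(plain, "Kubernetes master is running")) // true
}
```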
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:29:26.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan 30 12:29:26.604: INFO: Waiting up to 5m0s for pod "client-containers-275448bb-435c-11ea-a47a-0242ac110005" in namespace "e2e-tests-containers-xn2mg" to be "success or failure"
Jan 30 12:29:26.641: INFO: Pod "client-containers-275448bb-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.578068ms
Jan 30 12:29:28.662: INFO: Pod "client-containers-275448bb-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057277957s
Jan 30 12:29:30.679: INFO: Pod "client-containers-275448bb-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074896829s
Jan 30 12:29:32.701: INFO: Pod "client-containers-275448bb-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096943162s
Jan 30 12:29:34.721: INFO: Pod "client-containers-275448bb-435c-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116245259s
STEP: Saw pod success
Jan 30 12:29:34.721: INFO: Pod "client-containers-275448bb-435c-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:29:34.731: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-275448bb-435c-11ea-a47a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 30 12:29:34.835: INFO: Waiting for pod client-containers-275448bb-435c-11ea-a47a-0242ac110005 to disappear
Jan 30 12:29:35.020: INFO: Pod client-containers-275448bb-435c-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:29:35.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-xn2mg" for this suite.
Jan 30 12:29:43.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:29:43.523: INFO: namespace: e2e-tests-containers-xn2mg, resource: bindings, ignored listing per whitelist
Jan 30 12:29:43.538: INFO: namespace e2e-tests-containers-xn2mg deletion completed in 8.509199178s

• [SLOW TEST:17.352 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:29:43.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 30 12:29:54.403: INFO: Successfully updated pod "annotationupdate319de18b-435c-11ea-a47a-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:29:56.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z844r" for this suite.
Jan 30 12:30:20.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:30:20.805: INFO: namespace: e2e-tests-projected-z844r, resource: bindings, ignored listing per whitelist
Jan 30 12:30:20.838: INFO: namespace e2e-tests-projected-z844r deletion completed in 24.213439486s

• [SLOW TEST:37.300 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:30:20.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-47e0eeb9-435c-11ea-a47a-0242ac110005
Jan 30 12:30:21.048: INFO: Pod name my-hostname-basic-47e0eeb9-435c-11ea-a47a-0242ac110005: Found 0 pods out of 1
Jan 30 12:30:26.081: INFO: Pod name my-hostname-basic-47e0eeb9-435c-11ea-a47a-0242ac110005: Found 1 pods out of 1
Jan 30 12:30:26.081: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-47e0eeb9-435c-11ea-a47a-0242ac110005" are running
Jan 30 12:30:32.107: INFO: Pod "my-hostname-basic-47e0eeb9-435c-11ea-a47a-0242ac110005-xz52p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 12:30:21 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 12:30:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-47e0eeb9-435c-11ea-a47a-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 12:30:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-47e0eeb9-435c-11ea-a47a-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 12:30:21 +0000 UTC Reason: Message:}])
Jan 30 12:30:32.108: INFO: Trying to dial the pod
Jan 30 12:30:37.178: INFO: Controller my-hostname-basic-47e0eeb9-435c-11ea-a47a-0242ac110005: Got expected result from replica 1 [my-hostname-basic-47e0eeb9-435c-11ea-a47a-0242ac110005-xz52p]: "my-hostname-basic-47e0eeb9-435c-11ea-a47a-0242ac110005-xz52p", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:30:37.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-vfwqk" for this suite.
Jan 30 12:30:45.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:30:45.406: INFO: namespace: e2e-tests-replication-controller-vfwqk, resource: bindings, ignored listing per whitelist
Jan 30 12:30:45.451: INFO: namespace e2e-tests-replication-controller-vfwqk deletion completed in 8.239537053s

• [SLOW TEST:24.613 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
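The ReplicationController test above reports "Found 1 pods out of 1" and then ensures every matched pod is running before dialing it. The counting step can be sketched with plain values (matching by name prefix here for illustration; the real test selects pods by label):

```go
package main

import (
	"fmt"
	"strings"
)

type pod struct {
	name  string
	phase string
}

// runningReplicas counts pods belonging to the controller that have
// reached the Running phase. (Illustrative; not the e2e framework API.)
func runningReplicas(pods []pod, controller string) int {
	n := 0
	for _, p := range pods {
		if strings.HasPrefix(p.name, controller+"-") && p.phase == "Running" {
			n++
		}
	}
	return n
}

func main() {
	pods := []pod{
		{"my-hostname-basic-xz52p", "Running"},
		{"unrelated-abc12", "Running"},
	}
	fmt.Println(runningReplicas(pods, "my-hostname-basic")) // 1
}
```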
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:30:45.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan 30 12:30:46.312: INFO: Waiting up to 5m0s for pod "client-containers-56ee9484-435c-11ea-a47a-0242ac110005" in namespace "e2e-tests-containers-2zxmc" to be "success or failure"
Jan 30 12:30:46.453: INFO: Pod "client-containers-56ee9484-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 140.472502ms
Jan 30 12:30:48.736: INFO: Pod "client-containers-56ee9484-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.423691157s
Jan 30 12:30:50.753: INFO: Pod "client-containers-56ee9484-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440053796s
Jan 30 12:30:53.126: INFO: Pod "client-containers-56ee9484-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.813149077s
Jan 30 12:30:55.143: INFO: Pod "client-containers-56ee9484-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.830639882s
Jan 30 12:30:57.210: INFO: Pod "client-containers-56ee9484-435c-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.897127573s
STEP: Saw pod success
Jan 30 12:30:57.210: INFO: Pod "client-containers-56ee9484-435c-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:30:57.235: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-56ee9484-435c-11ea-a47a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 30 12:30:57.414: INFO: Waiting for pod client-containers-56ee9484-435c-11ea-a47a-0242ac110005 to disappear
Jan 30 12:30:57.575: INFO: Pod client-containers-56ee9484-435c-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:30:57.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-2zxmc" for this suite.
Jan 30 12:31:03.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:31:03.657: INFO: namespace: e2e-tests-containers-2zxmc, resource: bindings, ignored listing per whitelist
Jan 30 12:31:03.763: INFO: namespace e2e-tests-containers-2zxmc deletion completed in 6.179841646s

• [SLOW TEST:18.311 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:31:03.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6180aa76-435c-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 12:31:04.043: INFO: Waiting up to 5m0s for pod "pod-secrets-6182be15-435c-11ea-a47a-0242ac110005" in namespace "e2e-tests-secrets-pm8r8" to be "success or failure"
Jan 30 12:31:04.059: INFO: Pod "pod-secrets-6182be15-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.265413ms
Jan 30 12:31:06.083: INFO: Pod "pod-secrets-6182be15-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039613071s
Jan 30 12:31:08.110: INFO: Pod "pod-secrets-6182be15-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067171143s
Jan 30 12:31:10.128: INFO: Pod "pod-secrets-6182be15-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085048974s
Jan 30 12:31:12.159: INFO: Pod "pod-secrets-6182be15-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115862254s
Jan 30 12:31:14.192: INFO: Pod "pod-secrets-6182be15-435c-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148499277s
STEP: Saw pod success
Jan 30 12:31:14.192: INFO: Pod "pod-secrets-6182be15-435c-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:31:14.201: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6182be15-435c-11ea-a47a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 30 12:31:14.290: INFO: Waiting for pod pod-secrets-6182be15-435c-11ea-a47a-0242ac110005 to disappear
Jan 30 12:31:14.315: INFO: Pod pod-secrets-6182be15-435c-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:31:14.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-pm8r8" for this suite.
Jan 30 12:31:20.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:31:20.585: INFO: namespace: e2e-tests-secrets-pm8r8, resource: bindings, ignored listing per whitelist
Jan 30 12:31:20.682: INFO: namespace e2e-tests-secrets-pm8r8 deletion completed in 6.344960012s

• [SLOW TEST:16.918 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:31:20.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rrtbd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 30 12:31:20.861: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 30 12:31:53.115: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-rrtbd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 12:31:53.116: INFO: >>> kubeConfig: /root/.kube/config
I0130 12:31:53.193856       8 log.go:172] (0xc0022042c0) (0xc001bb5540) Create stream
I0130 12:31:53.193998       8 log.go:172] (0xc0022042c0) (0xc001bb5540) Stream added, broadcasting: 1
I0130 12:31:53.200032       8 log.go:172] (0xc0022042c0) Reply frame received for 1
I0130 12:31:53.200061       8 log.go:172] (0xc0022042c0) (0xc001bc9540) Create stream
I0130 12:31:53.200069       8 log.go:172] (0xc0022042c0) (0xc001bc9540) Stream added, broadcasting: 3
I0130 12:31:53.200881       8 log.go:172] (0xc0022042c0) Reply frame received for 3
I0130 12:31:53.200904       8 log.go:172] (0xc0022042c0) (0xc0017370e0) Create stream
I0130 12:31:53.200914       8 log.go:172] (0xc0022042c0) (0xc0017370e0) Stream added, broadcasting: 5
I0130 12:31:53.201583       8 log.go:172] (0xc0022042c0) Reply frame received for 5
I0130 12:31:53.352046       8 log.go:172] (0xc0022042c0) Data frame received for 3
I0130 12:31:53.352146       8 log.go:172] (0xc001bc9540) (3) Data frame handling
I0130 12:31:53.352216       8 log.go:172] (0xc001bc9540) (3) Data frame sent
I0130 12:31:53.511925       8 log.go:172] (0xc0022042c0) Data frame received for 1
I0130 12:31:53.512093       8 log.go:172] (0xc001bb5540) (1) Data frame handling
I0130 12:31:53.512137       8 log.go:172] (0xc001bb5540) (1) Data frame sent
I0130 12:31:53.512179       8 log.go:172] (0xc0022042c0) (0xc001bb5540) Stream removed, broadcasting: 1
I0130 12:31:53.513951       8 log.go:172] (0xc0022042c0) (0xc0017370e0) Stream removed, broadcasting: 5
I0130 12:31:53.514084       8 log.go:172] (0xc0022042c0) (0xc001bc9540) Stream removed, broadcasting: 3
I0130 12:31:53.514159       8 log.go:172] (0xc0022042c0) Go away received
I0130 12:31:53.514198       8 log.go:172] (0xc0022042c0) (0xc001bb5540) Stream removed, broadcasting: 1
I0130 12:31:53.514216       8 log.go:172] (0xc0022042c0) (0xc001bc9540) Stream removed, broadcasting: 3
I0130 12:31:53.514233       8 log.go:172] (0xc0022042c0) (0xc0017370e0) Stream removed, broadcasting: 5
Jan 30 12:31:53.514: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:31:53.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-rrtbd" for this suite.
Jan 30 12:32:17.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:32:17.802: INFO: namespace: e2e-tests-pod-network-test-rrtbd, resource: bindings, ignored listing per whitelist
Jan 30 12:32:17.830: INFO: namespace e2e-tests-pod-network-test-rrtbd deletion completed in 24.234480582s

• [SLOW TEST:57.148 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
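The intra-pod networking test above execs a curl from the host-test-container to a `/dial` endpoint, which proxies an HTTP request to the target pod and reports which pod answered. Building that probe URL can be sketched with `net/url` (note `url.Values.Encode` sorts query keys alphabetically, so the parameter order differs from the curl command in the log):

```go
package main

import (
	"fmt"
	"net/url"
)

// dialURL builds the probe URL the networking test curls: the test
// pod at hostIP:hostPort forwards a request of the given protocol to
// target:targetPort and returns the responding pod's hostname.
func dialURL(hostIP string, hostPort int, proto, target string, targetPort, tries int) string {
	u := url.URL{
		Scheme: "http",
		Host:   fmt.Sprintf("%s:%d", hostIP, hostPort),
		Path:   "/dial",
	}
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", proto)
	q.Set("host", target)
	q.Set("port", fmt.Sprint(targetPort))
	q.Set("tries", fmt.Sprint(tries))
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	// The same endpoints as the ExecWithOptions curl logged above.
	fmt.Println(dialURL("10.32.0.5", 8080, "http", "10.32.0.4", 8080, 1))
}
```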
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:32:17.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 12:32:18.092: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8da4b5a8-435c-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-zqzzw" to be "success or failure"
Jan 30 12:32:18.116: INFO: Pod "downwardapi-volume-8da4b5a8-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.562608ms
Jan 30 12:32:20.145: INFO: Pod "downwardapi-volume-8da4b5a8-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052794723s
Jan 30 12:32:22.177: INFO: Pod "downwardapi-volume-8da4b5a8-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084973916s
Jan 30 12:32:24.196: INFO: Pod "downwardapi-volume-8da4b5a8-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10459291s
Jan 30 12:32:26.216: INFO: Pod "downwardapi-volume-8da4b5a8-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124020778s
Jan 30 12:32:28.231: INFO: Pod "downwardapi-volume-8da4b5a8-435c-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139084499s
STEP: Saw pod success
Jan 30 12:32:28.231: INFO: Pod "downwardapi-volume-8da4b5a8-435c-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:32:28.235: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8da4b5a8-435c-11ea-a47a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 30 12:32:28.996: INFO: Waiting for pod downwardapi-volume-8da4b5a8-435c-11ea-a47a-0242ac110005 to disappear
Jan 30 12:32:29.153: INFO: Pod downwardapi-volume-8da4b5a8-435c-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:32:29.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zqzzw" for this suite.
Jan 30 12:32:35.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:32:35.394: INFO: namespace: e2e-tests-downward-api-zqzzw, resource: bindings, ignored listing per whitelist
Jan 30 12:32:35.477: INFO: namespace e2e-tests-downward-api-zqzzw deletion completed in 6.306661617s

• [SLOW TEST:17.647 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
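For readers following along, the pod this spec creates looks roughly like the following. This is a minimal sketch, not the suite's actual manifest: the generated pod name in the log above is unique per run, `busybox` stands in for whatever test image the e2e framework uses, and the `250m` request is illustrative. The mechanism itself — a `downwardAPI` volume item with a `resourceFieldRef` pointing at `requests.cpu` — is the real Kubernetes API the test exercises.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the suite generates a unique name per run
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # matches the container name in the log
    image: busybox                   # illustrative; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                # request is exposed in units of this divisor
```

The test then reads the container's logs and checks that the file content matches the declared CPU request, which is why a clean run ends with "Saw pod success".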
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:32:35.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 30 12:32:35.719: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mf8k9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mf8k9/configmaps/e2e-watch-test-configmap-a,UID:981ffb92-435c-11ea-a994-fa163e34d433,ResourceVersion:19972142,Generation:0,CreationTimestamp:2020-01-30 12:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 30 12:32:35.720: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mf8k9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mf8k9/configmaps/e2e-watch-test-configmap-a,UID:981ffb92-435c-11ea-a994-fa163e34d433,ResourceVersion:19972142,Generation:0,CreationTimestamp:2020-01-30 12:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 30 12:32:45.756: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mf8k9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mf8k9/configmaps/e2e-watch-test-configmap-a,UID:981ffb92-435c-11ea-a994-fa163e34d433,ResourceVersion:19972155,Generation:0,CreationTimestamp:2020-01-30 12:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 30 12:32:45.757: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mf8k9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mf8k9/configmaps/e2e-watch-test-configmap-a,UID:981ffb92-435c-11ea-a994-fa163e34d433,ResourceVersion:19972155,Generation:0,CreationTimestamp:2020-01-30 12:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 30 12:32:55.787: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mf8k9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mf8k9/configmaps/e2e-watch-test-configmap-a,UID:981ffb92-435c-11ea-a994-fa163e34d433,ResourceVersion:19972167,Generation:0,CreationTimestamp:2020-01-30 12:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 30 12:32:55.788: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mf8k9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mf8k9/configmaps/e2e-watch-test-configmap-a,UID:981ffb92-435c-11ea-a994-fa163e34d433,ResourceVersion:19972167,Generation:0,CreationTimestamp:2020-01-30 12:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 30 12:33:05.813: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mf8k9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mf8k9/configmaps/e2e-watch-test-configmap-a,UID:981ffb92-435c-11ea-a994-fa163e34d433,ResourceVersion:19972180,Generation:0,CreationTimestamp:2020-01-30 12:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 30 12:33:05.814: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mf8k9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mf8k9/configmaps/e2e-watch-test-configmap-a,UID:981ffb92-435c-11ea-a994-fa163e34d433,ResourceVersion:19972180,Generation:0,CreationTimestamp:2020-01-30 12:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 30 12:33:15.862: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-mf8k9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mf8k9/configmaps/e2e-watch-test-configmap-b,UID:b012db97-435c-11ea-a994-fa163e34d433,ResourceVersion:19972192,Generation:0,CreationTimestamp:2020-01-30 12:33:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 30 12:33:15.862: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-mf8k9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mf8k9/configmaps/e2e-watch-test-configmap-b,UID:b012db97-435c-11ea-a994-fa163e34d433,ResourceVersion:19972192,Generation:0,CreationTimestamp:2020-01-30 12:33:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 30 12:33:25.886: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-mf8k9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mf8k9/configmaps/e2e-watch-test-configmap-b,UID:b012db97-435c-11ea-a994-fa163e34d433,ResourceVersion:19972205,Generation:0,CreationTimestamp:2020-01-30 12:33:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 30 12:33:25.886: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-mf8k9,SelfLink:/api/v1/namespaces/e2e-tests-watch-mf8k9/configmaps/e2e-watch-test-configmap-b,UID:b012db97-435c-11ea-a994-fa163e34d433,ResourceVersion:19972205,Generation:0,CreationTimestamp:2020-01-30 12:33:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:33:35.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-mf8k9" for this suite.
Jan 30 12:33:41.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:33:42.038: INFO: namespace: e2e-tests-watch-mf8k9, resource: bindings, ignored listing per whitelist
Jan 30 12:33:42.192: INFO: namespace e2e-tests-watch-mf8k9 deletion completed in 6.269134514s

• [SLOW TEST:66.714 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
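The add/modify/delete sequence this spec verifies can be reproduced by hand with kubectl. The sketch below assumes a reachable cluster and a namespace of your own; the label key and values mirror the ones in the log above, and the configmap name is illustrative:

```shell
# Terminal 1: watch configmaps carrying label A (blocks until interrupted)
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch

# Terminal 2: drive the ADDED / MODIFIED / DELETED events the watcher sees
kubectl create configmap e2e-watch-test-configmap-a
kubectl label configmap e2e-watch-test-configmap-a watch-this-configmap=multiple-watchers-A
kubectl patch configmap e2e-watch-test-configmap-a --type merge -p '{"data":{"mutation":"1"}}'
kubectl delete configmap e2e-watch-test-configmap-a
```

Each event appears twice in the log above because two watchers match it: the watch on label A and the watch on "label A or B".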
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:33:42.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 12:33:42.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bff2cb65-435c-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-mv55c" to be "success or failure"
Jan 30 12:33:42.560: INFO: Pod "downwardapi-volume-bff2cb65-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.381857ms
Jan 30 12:33:44.977: INFO: Pod "downwardapi-volume-bff2cb65-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.4450571s
Jan 30 12:33:46.995: INFO: Pod "downwardapi-volume-bff2cb65-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.463803589s
Jan 30 12:33:49.022: INFO: Pod "downwardapi-volume-bff2cb65-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.490508105s
Jan 30 12:33:51.034: INFO: Pod "downwardapi-volume-bff2cb65-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.501949855s
Jan 30 12:33:53.675: INFO: Pod "downwardapi-volume-bff2cb65-435c-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.143259551s
STEP: Saw pod success
Jan 30 12:33:53.675: INFO: Pod "downwardapi-volume-bff2cb65-435c-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:33:54.101: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bff2cb65-435c-11ea-a47a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 30 12:33:54.237: INFO: Waiting for pod downwardapi-volume-bff2cb65-435c-11ea-a47a-0242ac110005 to disappear
Jan 30 12:33:54.251: INFO: Pod downwardapi-volume-bff2cb65-435c-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:33:54.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mv55c" for this suite.
Jan 30 12:34:00.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:34:00.544: INFO: namespace: e2e-tests-projected-mv55c, resource: bindings, ignored listing per whitelist
Jan 30 12:34:00.733: INFO: namespace e2e-tests-projected-mv55c deletion completed in 6.473492441s

• [SLOW TEST:18.541 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:34:00.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 30 12:34:01.152: INFO: Waiting up to 5m0s for pod "var-expansion-cb019a65-435c-11ea-a47a-0242ac110005" in namespace "e2e-tests-var-expansion-2ph8c" to be "success or failure"
Jan 30 12:34:01.213: INFO: Pod "var-expansion-cb019a65-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.218244ms
Jan 30 12:34:03.778: INFO: Pod "var-expansion-cb019a65-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.626096476s
Jan 30 12:34:05.810: INFO: Pod "var-expansion-cb019a65-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.658382598s
Jan 30 12:34:07.890: INFO: Pod "var-expansion-cb019a65-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.737772393s
Jan 30 12:34:10.155: INFO: Pod "var-expansion-cb019a65-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.002685001s
Jan 30 12:34:12.191: INFO: Pod "var-expansion-cb019a65-435c-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.038953235s
STEP: Saw pod success
Jan 30 12:34:12.191: INFO: Pod "var-expansion-cb019a65-435c-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:34:12.202: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-cb019a65-435c-11ea-a47a-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 30 12:34:12.437: INFO: Waiting for pod var-expansion-cb019a65-435c-11ea-a47a-0242ac110005 to disappear
Jan 30 12:34:12.456: INFO: Pod var-expansion-cb019a65-435c-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:34:12.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-2ph8c" for this suite.
Jan 30 12:34:18.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:34:18.875: INFO: namespace: e2e-tests-var-expansion-2ph8c, resource: bindings, ignored listing per whitelist
Jan 30 12:34:18.880: INFO: namespace e2e-tests-var-expansion-2ph8c deletion completed in 6.405479414s

• [SLOW TEST:18.146 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
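The substitution being exercised here is Kubernetes' `$(VAR)` expansion in a container's `command`/`args`, which the kubelet resolves against the container's environment before the process starts (it is not shell expansion). A minimal sketch, with an illustrative image, variable, and value:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # the suite generates a unique name per run
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container        # matches the container name in the log
    image: busybox              # illustrative
    env:
    - name: MESSAGE
      value: "test substitution"
    command: ["sh", "-c"]
    # $(MESSAGE) below is expanded by the kubelet, not by sh
    args: ["echo $(MESSAGE)"]
```

The test asserts that the expanded value shows up in the container's output, hence the "success or failure" condition polled in the log above.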
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:34:18.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 12:34:19.093: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5c6755b-435c-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-fcmdj" to be "success or failure"
Jan 30 12:34:19.102: INFO: Pod "downwardapi-volume-d5c6755b-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.348529ms
Jan 30 12:34:21.118: INFO: Pod "downwardapi-volume-d5c6755b-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024985978s
Jan 30 12:34:23.137: INFO: Pod "downwardapi-volume-d5c6755b-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043210726s
Jan 30 12:34:25.603: INFO: Pod "downwardapi-volume-d5c6755b-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.509756325s
Jan 30 12:34:27.621: INFO: Pod "downwardapi-volume-d5c6755b-435c-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.527508315s
Jan 30 12:34:29.646: INFO: Pod "downwardapi-volume-d5c6755b-435c-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.553137973s
STEP: Saw pod success
Jan 30 12:34:29.647: INFO: Pod "downwardapi-volume-d5c6755b-435c-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:34:29.652: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d5c6755b-435c-11ea-a47a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 30 12:34:30.062: INFO: Waiting for pod downwardapi-volume-d5c6755b-435c-11ea-a47a-0242ac110005 to disappear
Jan 30 12:34:30.080: INFO: Pod downwardapi-volume-d5c6755b-435c-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:34:30.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fcmdj" for this suite.
Jan 30 12:34:36.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:34:36.346: INFO: namespace: e2e-tests-projected-fcmdj, resource: bindings, ignored listing per whitelist
Jan 30 12:34:36.358: INFO: namespace e2e-tests-projected-fcmdj deletion completed in 6.264094472s

• [SLOW TEST:17.478 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:34:36.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-flctx.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-flctx.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-flctx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-flctx.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-flctx.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-flctx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 12:34:52.785: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.790: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.799: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.805: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.810: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.817: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.830: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-flctx.svc.cluster.local from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.839: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.845: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.851: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.857: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.863: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.871: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.876: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.882: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.889: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.895: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-flctx.svc.cluster.local from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.903: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.914: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.929: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005: the server could not find the requested resource (get pods dns-test-e03a7dce-435c-11ea-a47a-0242ac110005)
Jan 30 12:34:52.929: INFO: Lookups using e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-flctx.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-flctx.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 30 12:34:58.076: INFO: DNS probes using e2e-tests-dns-flctx/dns-test-e03a7dce-435c-11ea-a47a-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:34:58.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-flctx" for this suite.
Jan 30 12:35:06.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:35:06.668: INFO: namespace: e2e-tests-dns-flctx, resource: bindings, ignored listing per whitelist
Jan 30 12:35:06.766: INFO: namespace e2e-tests-dns-flctx deletion completed in 8.519296016s

• [SLOW TEST:30.407 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:35:06.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xxn28
Jan 30 12:35:15.420: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xxn28
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 12:35:15.434: INFO: Initial restart count of pod liveness-http is 0
Jan 30 12:35:35.831: INFO: Restart count of pod e2e-tests-container-probe-xxn28/liveness-http is now 1 (20.396727749s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:35:35.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-xxn28" for this suite.
Jan 30 12:35:42.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:35:42.193: INFO: namespace: e2e-tests-container-probe-xxn28, resource: bindings, ignored listing per whitelist
Jan 30 12:35:42.219: INFO: namespace e2e-tests-container-probe-xxn28 deletion completed in 6.228886773s

• [SLOW TEST:35.453 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:35:42.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-079f21fb-435d-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 12:35:42.804: INFO: Waiting up to 5m0s for pod "pod-configmaps-07a24d44-435d-11ea-a47a-0242ac110005" in namespace "e2e-tests-configmap-hj85c" to be "success or failure"
Jan 30 12:35:42.817: INFO: Pod "pod-configmaps-07a24d44-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.125893ms
Jan 30 12:35:44.845: INFO: Pod "pod-configmaps-07a24d44-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040772237s
Jan 30 12:35:46.958: INFO: Pod "pod-configmaps-07a24d44-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153683805s
Jan 30 12:35:48.976: INFO: Pod "pod-configmaps-07a24d44-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172225324s
Jan 30 12:35:51.007: INFO: Pod "pod-configmaps-07a24d44-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203243282s
Jan 30 12:35:53.034: INFO: Pod "pod-configmaps-07a24d44-435d-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.230522009s
STEP: Saw pod success
Jan 30 12:35:53.035: INFO: Pod "pod-configmaps-07a24d44-435d-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:35:53.052: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-07a24d44-435d-11ea-a47a-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 30 12:35:53.171: INFO: Waiting for pod pod-configmaps-07a24d44-435d-11ea-a47a-0242ac110005 to disappear
Jan 30 12:35:53.181: INFO: Pod pod-configmaps-07a24d44-435d-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:35:53.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hj85c" for this suite.
Jan 30 12:35:59.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:35:59.317: INFO: namespace: e2e-tests-configmap-hj85c, resource: bindings, ignored listing per whitelist
Jan 30 12:35:59.480: INFO: namespace e2e-tests-configmap-hj85c deletion completed in 6.28736228s

• [SLOW TEST:17.260 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:35:59.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 30 12:36:10.342: INFO: Successfully updated pod "labelsupdate11c05789-435d-11ea-a47a-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:36:12.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gfsrs" for this suite.
Jan 30 12:36:36.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:36:36.659: INFO: namespace: e2e-tests-downward-api-gfsrs, resource: bindings, ignored listing per whitelist
Jan 30 12:36:36.710: INFO: namespace e2e-tests-downward-api-gfsrs deletion completed in 24.208142815s

• [SLOW TEST:37.230 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:36:36.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:36:36.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-kssdb" for this suite.
Jan 30 12:37:01.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:37:01.200: INFO: namespace: e2e-tests-pods-kssdb, resource: bindings, ignored listing per whitelist
Jan 30 12:37:01.266: INFO: namespace e2e-tests-pods-kssdb deletion completed in 24.336709547s

• [SLOW TEST:24.556 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:37:01.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 12:37:01.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36945ecd-435d-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-jdfjw" to be "success or failure"
Jan 30 12:37:01.601: INFO: Pod "downwardapi-volume-36945ecd-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.682368ms
Jan 30 12:37:03.620: INFO: Pod "downwardapi-volume-36945ecd-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026606365s
Jan 30 12:37:05.656: INFO: Pod "downwardapi-volume-36945ecd-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062223944s
Jan 30 12:37:07.676: INFO: Pod "downwardapi-volume-36945ecd-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082740286s
Jan 30 12:37:09.706: INFO: Pod "downwardapi-volume-36945ecd-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112472424s
Jan 30 12:37:12.452: INFO: Pod "downwardapi-volume-36945ecd-435d-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.858690132s
STEP: Saw pod success
Jan 30 12:37:12.453: INFO: Pod "downwardapi-volume-36945ecd-435d-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:37:12.470: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-36945ecd-435d-11ea-a47a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 30 12:37:12.764: INFO: Waiting for pod downwardapi-volume-36945ecd-435d-11ea-a47a-0242ac110005 to disappear
Jan 30 12:37:12.773: INFO: Pod downwardapi-volume-36945ecd-435d-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:37:12.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jdfjw" for this suite.
Jan 30 12:37:18.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:37:18.882: INFO: namespace: e2e-tests-projected-jdfjw, resource: bindings, ignored listing per whitelist
Jan 30 12:37:19.111: INFO: namespace e2e-tests-projected-jdfjw deletion completed in 6.32345407s

• [SLOW TEST:17.843 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:37:19.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 30 12:37:19.386: INFO: Waiting up to 5m0s for pod "downward-api-413c5f4c-435d-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-s9lq9" to be "success or failure"
Jan 30 12:37:19.412: INFO: Pod "downward-api-413c5f4c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.465217ms
Jan 30 12:37:21.435: INFO: Pod "downward-api-413c5f4c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049000479s
Jan 30 12:37:23.461: INFO: Pod "downward-api-413c5f4c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074933204s
Jan 30 12:37:25.796: INFO: Pod "downward-api-413c5f4c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.409920778s
Jan 30 12:37:27.825: INFO: Pod "downward-api-413c5f4c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.438626091s
Jan 30 12:37:29.851: INFO: Pod "downward-api-413c5f4c-435d-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.465069994s
STEP: Saw pod success
Jan 30 12:37:29.852: INFO: Pod "downward-api-413c5f4c-435d-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:37:29.868: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-413c5f4c-435d-11ea-a47a-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 30 12:37:30.061: INFO: Waiting for pod downward-api-413c5f4c-435d-11ea-a47a-0242ac110005 to disappear
Jan 30 12:37:30.078: INFO: Pod downward-api-413c5f4c-435d-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:37:30.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-s9lq9" for this suite.
Jan 30 12:37:36.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:37:36.307: INFO: namespace: e2e-tests-downward-api-s9lq9, resource: bindings, ignored listing per whitelist
Jan 30 12:37:36.364: INFO: namespace e2e-tests-downward-api-s9lq9 deletion completed in 6.253409704s

• [SLOW TEST:17.252 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:37:36.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-4b88a757-435d-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 12:37:36.767: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4b98326c-435d-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-n8xvl" to be "success or failure"
Jan 30 12:37:36.785: INFO: Pod "pod-projected-secrets-4b98326c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.423487ms
Jan 30 12:37:38.799: INFO: Pod "pod-projected-secrets-4b98326c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03209738s
Jan 30 12:37:41.335: INFO: Pod "pod-projected-secrets-4b98326c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.567948752s
Jan 30 12:37:43.350: INFO: Pod "pod-projected-secrets-4b98326c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.582913521s
Jan 30 12:37:45.375: INFO: Pod "pod-projected-secrets-4b98326c-435d-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.607933871s
STEP: Saw pod success
Jan 30 12:37:45.375: INFO: Pod "pod-projected-secrets-4b98326c-435d-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:37:45.764: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-4b98326c-435d-11ea-a47a-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 30 12:37:46.005: INFO: Waiting for pod pod-projected-secrets-4b98326c-435d-11ea-a47a-0242ac110005 to disappear
Jan 30 12:37:46.019: INFO: Pod pod-projected-secrets-4b98326c-435d-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:37:46.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n8xvl" for this suite.
Jan 30 12:37:52.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:37:52.125: INFO: namespace: e2e-tests-projected-n8xvl, resource: bindings, ignored listing per whitelist
Jan 30 12:37:52.241: INFO: namespace e2e-tests-projected-n8xvl deletion completed in 6.213103169s

• [SLOW TEST:15.877 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:37:52.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-54f43d36-435d-11ea-a47a-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:38:04.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wc4q7" for this suite.
Jan 30 12:38:28.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:38:28.795: INFO: namespace: e2e-tests-configmap-wc4q7, resource: bindings, ignored listing per whitelist
Jan 30 12:38:28.876: INFO: namespace e2e-tests-configmap-wc4q7 deletion completed in 24.20184222s

• [SLOW TEST:36.635 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:38:28.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-6ac32918-435d-11ea-a47a-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-6ac32906-435d-11ea-a47a-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 30 12:38:29.081: INFO: Waiting up to 5m0s for pod "projected-volume-6ac3275c-435d-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-8zjtk" to be "success or failure"
Jan 30 12:38:29.171: INFO: Pod "projected-volume-6ac3275c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 90.241348ms
Jan 30 12:38:31.202: INFO: Pod "projected-volume-6ac3275c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12142295s
Jan 30 12:38:33.220: INFO: Pod "projected-volume-6ac3275c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138538727s
Jan 30 12:38:35.249: INFO: Pod "projected-volume-6ac3275c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167463208s
Jan 30 12:38:37.298: INFO: Pod "projected-volume-6ac3275c-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217439362s
Jan 30 12:38:39.310: INFO: Pod "projected-volume-6ac3275c-435d-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.228704799s
STEP: Saw pod success
Jan 30 12:38:39.310: INFO: Pod "projected-volume-6ac3275c-435d-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:38:39.337: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-6ac3275c-435d-11ea-a47a-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan 30 12:38:39.985: INFO: Waiting for pod projected-volume-6ac3275c-435d-11ea-a47a-0242ac110005 to disappear
Jan 30 12:38:39.999: INFO: Pod projected-volume-6ac3275c-435d-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:38:40.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8zjtk" for this suite.
Jan 30 12:38:46.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:38:46.696: INFO: namespace: e2e-tests-projected-8zjtk, resource: bindings, ignored listing per whitelist
Jan 30 12:38:46.741: INFO: namespace e2e-tests-projected-8zjtk deletion completed in 6.525072083s

• [SLOW TEST:17.865 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:38:46.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 30 12:38:46.924: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:39:03.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-p79k5" for this suite.
Jan 30 12:39:11.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:39:11.747: INFO: namespace: e2e-tests-init-container-p79k5, resource: bindings, ignored listing per whitelist
Jan 30 12:39:11.991: INFO: namespace e2e-tests-init-container-p79k5 deletion completed in 8.350823122s

• [SLOW TEST:25.250 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:39:11.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-84a9aeea-435d-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 30 12:39:12.610: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-84ae283e-435d-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-jdtbl" to be "success or failure"
Jan 30 12:39:12.639: INFO: Pod "pod-projected-secrets-84ae283e-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.588191ms
Jan 30 12:39:14.671: INFO: Pod "pod-projected-secrets-84ae283e-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061088427s
Jan 30 12:39:16.724: INFO: Pod "pod-projected-secrets-84ae283e-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114344324s
Jan 30 12:39:18.766: INFO: Pod "pod-projected-secrets-84ae283e-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156633656s
Jan 30 12:39:20.789: INFO: Pod "pod-projected-secrets-84ae283e-435d-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.178770708s
STEP: Saw pod success
Jan 30 12:39:20.789: INFO: Pod "pod-projected-secrets-84ae283e-435d-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:39:20.796: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-84ae283e-435d-11ea-a47a-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 30 12:39:20.907: INFO: Waiting for pod pod-projected-secrets-84ae283e-435d-11ea-a47a-0242ac110005 to disappear
Jan 30 12:39:20.958: INFO: Pod pod-projected-secrets-84ae283e-435d-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:39:20.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jdtbl" for this suite.
Jan 30 12:39:27.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:39:27.148: INFO: namespace: e2e-tests-projected-jdtbl, resource: bindings, ignored listing per whitelist
Jan 30 12:39:27.184: INFO: namespace e2e-tests-projected-jdtbl deletion completed in 6.210054745s

• [SLOW TEST:15.192 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:39:27.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 30 12:39:27.352: INFO: Waiting up to 5m0s for pod "pod-8d830c52-435d-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-dvsl2" to be "success or failure"
Jan 30 12:39:27.414: INFO: Pod "pod-8d830c52-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.688343ms
Jan 30 12:39:29.473: INFO: Pod "pod-8d830c52-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121005795s
Jan 30 12:39:31.491: INFO: Pod "pod-8d830c52-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139412349s
Jan 30 12:39:33.837: INFO: Pod "pod-8d830c52-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.484949235s
Jan 30 12:39:35.858: INFO: Pod "pod-8d830c52-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.506301783s
Jan 30 12:39:37.874: INFO: Pod "pod-8d830c52-435d-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.52246044s
STEP: Saw pod success
Jan 30 12:39:37.875: INFO: Pod "pod-8d830c52-435d-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:39:37.880: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8d830c52-435d-11ea-a47a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 30 12:39:38.685: INFO: Waiting for pod pod-8d830c52-435d-11ea-a47a-0242ac110005 to disappear
Jan 30 12:39:38.971: INFO: Pod pod-8d830c52-435d-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:39:38.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dvsl2" for this suite.
Jan 30 12:39:45.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:39:45.130: INFO: namespace: e2e-tests-emptydir-dvsl2, resource: bindings, ignored listing per whitelist
Jan 30 12:39:45.311: INFO: namespace e2e-tests-emptydir-dvsl2 deletion completed in 6.325460869s

• [SLOW TEST:18.128 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:39:45.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-vqh75
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-vqh75
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-vqh75
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-vqh75
STEP: Waiting until stateful pod ss-0 has been deleted and recreated at least once in namespace e2e-tests-statefulset-vqh75
Jan 30 12:39:59.936: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vqh75, name: ss-0, uid: 9d12c319-435d-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan 30 12:40:02.485: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vqh75, name: ss-0, uid: 9d12c319-435d-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 30 12:40:02.622: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vqh75, name: ss-0, uid: 9d12c319-435d-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 30 12:40:02.647: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-vqh75
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-vqh75
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-vqh75 and enters the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 30 12:40:15.737: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vqh75
Jan 30 12:40:15.745: INFO: Scaling statefulset ss to 0
Jan 30 12:40:35.846: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 12:40:35.859: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:40:35.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vqh75" for this suite.
Jan 30 12:40:41.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:40:42.109: INFO: namespace: e2e-tests-statefulset-vqh75, resource: bindings, ignored listing per whitelist
Jan 30 12:40:42.208: INFO: namespace e2e-tests-statefulset-vqh75 deletion completed in 6.287714133s

• [SLOW TEST:56.896 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:40:42.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ba435007-435d-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 12:40:42.523: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ba443be6-435d-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-9w6mk" to be "success or failure"
Jan 30 12:40:42.542: INFO: Pod "pod-projected-configmaps-ba443be6-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.206681ms
Jan 30 12:40:44.572: INFO: Pod "pod-projected-configmaps-ba443be6-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047946878s
Jan 30 12:40:46.612: INFO: Pod "pod-projected-configmaps-ba443be6-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088187847s
Jan 30 12:40:48.715: INFO: Pod "pod-projected-configmaps-ba443be6-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19127439s
Jan 30 12:40:50.730: INFO: Pod "pod-projected-configmaps-ba443be6-435d-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.206492332s
STEP: Saw pod success
Jan 30 12:40:50.730: INFO: Pod "pod-projected-configmaps-ba443be6-435d-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:40:50.738: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ba443be6-435d-11ea-a47a-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 30 12:40:50.892: INFO: Waiting for pod pod-projected-configmaps-ba443be6-435d-11ea-a47a-0242ac110005 to disappear
Jan 30 12:40:50.908: INFO: Pod pod-projected-configmaps-ba443be6-435d-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:40:50.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9w6mk" for this suite.
Jan 30 12:40:57.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:40:57.113: INFO: namespace: e2e-tests-projected-9w6mk, resource: bindings, ignored listing per whitelist
Jan 30 12:40:57.373: INFO: namespace e2e-tests-projected-9w6mk deletion completed in 6.450765009s

• [SLOW TEST:15.165 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:40:57.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-c35a165e-435d-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 12:40:57.753: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c35b463b-435d-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-fngmh" to be "success or failure"
Jan 30 12:40:57.766: INFO: Pod "pod-projected-configmaps-c35b463b-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.538548ms
Jan 30 12:41:00.058: INFO: Pod "pod-projected-configmaps-c35b463b-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305162675s
Jan 30 12:41:02.077: INFO: Pod "pod-projected-configmaps-c35b463b-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323809258s
Jan 30 12:41:04.142: INFO: Pod "pod-projected-configmaps-c35b463b-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38835118s
Jan 30 12:41:06.170: INFO: Pod "pod-projected-configmaps-c35b463b-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.416943279s
Jan 30 12:41:08.232: INFO: Pod "pod-projected-configmaps-c35b463b-435d-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.479280055s
STEP: Saw pod success
Jan 30 12:41:08.233: INFO: Pod "pod-projected-configmaps-c35b463b-435d-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:41:08.293: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c35b463b-435d-11ea-a47a-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 30 12:41:08.407: INFO: Waiting for pod pod-projected-configmaps-c35b463b-435d-11ea-a47a-0242ac110005 to disappear
Jan 30 12:41:08.416: INFO: Pod pod-projected-configmaps-c35b463b-435d-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:41:08.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fngmh" for this suite.
Jan 30 12:41:16.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:41:16.629: INFO: namespace: e2e-tests-projected-fngmh, resource: bindings, ignored listing per whitelist
Jan 30 12:41:16.684: INFO: namespace e2e-tests-projected-fngmh deletion completed in 8.257067528s

• [SLOW TEST:19.310 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:41:16.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 30 12:41:16.903: INFO: Waiting up to 5m0s for pod "pod-cece5fcf-435d-11ea-a47a-0242ac110005" in namespace "e2e-tests-emptydir-ff2dk" to be "success or failure"
Jan 30 12:41:16.923: INFO: Pod "pod-cece5fcf-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.181363ms
Jan 30 12:41:18.936: INFO: Pod "pod-cece5fcf-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033322928s
Jan 30 12:41:20.970: INFO: Pod "pod-cece5fcf-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067580498s
Jan 30 12:41:23.115: INFO: Pod "pod-cece5fcf-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212018266s
Jan 30 12:41:25.140: INFO: Pod "pod-cece5fcf-435d-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.2369732s
STEP: Saw pod success
Jan 30 12:41:25.140: INFO: Pod "pod-cece5fcf-435d-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:41:25.151: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-cece5fcf-435d-11ea-a47a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 30 12:41:25.246: INFO: Waiting for pod pod-cece5fcf-435d-11ea-a47a-0242ac110005 to disappear
Jan 30 12:41:25.336: INFO: Pod pod-cece5fcf-435d-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:41:25.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ff2dk" for this suite.
Jan 30 12:41:31.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:41:31.560: INFO: namespace: e2e-tests-emptydir-ff2dk, resource: bindings, ignored listing per whitelist
Jan 30 12:41:31.580: INFO: namespace e2e-tests-emptydir-ff2dk deletion completed in 6.231402705s

• [SLOW TEST:14.896 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:41:31.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 30 12:41:31.802: INFO: Waiting up to 5m0s for pod "var-expansion-d7ad5c1b-435d-11ea-a47a-0242ac110005" in namespace "e2e-tests-var-expansion-qs6rw" to be "success or failure"
Jan 30 12:41:31.826: INFO: Pod "var-expansion-d7ad5c1b-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.003875ms
Jan 30 12:41:33.842: INFO: Pod "var-expansion-d7ad5c1b-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039977014s
Jan 30 12:41:35.869: INFO: Pod "var-expansion-d7ad5c1b-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066681267s
Jan 30 12:41:38.046: INFO: Pod "var-expansion-d7ad5c1b-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.244140504s
Jan 30 12:41:40.393: INFO: Pod "var-expansion-d7ad5c1b-435d-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.590837766s
Jan 30 12:41:42.410: INFO: Pod "var-expansion-d7ad5c1b-435d-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.608385327s
STEP: Saw pod success
Jan 30 12:41:42.411: INFO: Pod "var-expansion-d7ad5c1b-435d-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:41:42.415: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-d7ad5c1b-435d-11ea-a47a-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 30 12:41:42.803: INFO: Waiting for pod var-expansion-d7ad5c1b-435d-11ea-a47a-0242ac110005 to disappear
Jan 30 12:41:42.824: INFO: Pod var-expansion-d7ad5c1b-435d-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:41:42.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-qs6rw" for this suite.
Jan 30 12:41:48.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:41:48.962: INFO: namespace: e2e-tests-var-expansion-qs6rw, resource: bindings, ignored listing per whitelist
Jan 30 12:41:49.049: INFO: namespace e2e-tests-var-expansion-qs6rw deletion completed in 6.211817277s

• [SLOW TEST:17.469 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:41:49.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:42:49.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-c2kcp" for this suite.
Jan 30 12:43:13.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:43:14.099: INFO: namespace: e2e-tests-container-probe-c2kcp, resource: bindings, ignored listing per whitelist
Jan 30 12:43:14.138: INFO: namespace e2e-tests-container-probe-c2kcp deletion completed in 24.304777142s

• [SLOW TEST:85.089 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:43:14.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-14d68eb5-435e-11ea-a47a-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-14d68fc4-435e-11ea-a47a-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-14d68eb5-435e-11ea-a47a-0242ac110005
STEP: Updating configmap cm-test-opt-upd-14d68fc4-435e-11ea-a47a-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-14d6900b-435e-11ea-a47a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:43:29.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t2xt5" for this suite.
Jan 30 12:43:53.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:43:53.524: INFO: namespace: e2e-tests-projected-t2xt5, resource: bindings, ignored listing per whitelist
Jan 30 12:43:53.537: INFO: namespace e2e-tests-projected-t2xt5 deletion completed in 24.303347728s

• [SLOW TEST:39.398 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:43:53.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-zwwd
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 12:43:53.933: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zwwd" in namespace "e2e-tests-subpath-sjv52" to be "success or failure"
Jan 30 12:43:53.981: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Pending", Reason="", readiness=false. Elapsed: 48.014336ms
Jan 30 12:43:56.018: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084174802s
Jan 30 12:43:58.039: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105857927s
Jan 30 12:44:00.059: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125173859s
Jan 30 12:44:02.110: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176348197s
Jan 30 12:44:04.393: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.459259045s
Jan 30 12:44:06.494: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.560476316s
Jan 30 12:44:08.530: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Running", Reason="", readiness=false. Elapsed: 14.596629951s
Jan 30 12:44:10.587: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Running", Reason="", readiness=false. Elapsed: 16.653916364s
Jan 30 12:44:12.604: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Running", Reason="", readiness=false. Elapsed: 18.670573306s
Jan 30 12:44:14.618: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Running", Reason="", readiness=false. Elapsed: 20.684054397s
Jan 30 12:44:16.633: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Running", Reason="", readiness=false. Elapsed: 22.699634645s
Jan 30 12:44:18.740: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Running", Reason="", readiness=false. Elapsed: 24.806462519s
Jan 30 12:44:20.763: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Running", Reason="", readiness=false. Elapsed: 26.829117235s
Jan 30 12:44:22.776: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Running", Reason="", readiness=false. Elapsed: 28.843022759s
Jan 30 12:44:24.795: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Running", Reason="", readiness=false. Elapsed: 30.86199877s
Jan 30 12:44:26.841: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Running", Reason="", readiness=false. Elapsed: 32.907252056s
Jan 30 12:44:28.901: INFO: Pod "pod-subpath-test-downwardapi-zwwd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.967431248s
STEP: Saw pod success
Jan 30 12:44:28.901: INFO: Pod "pod-subpath-test-downwardapi-zwwd" satisfied condition "success or failure"
Jan 30 12:44:28.985: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-zwwd container test-container-subpath-downwardapi-zwwd: 
STEP: delete the pod
Jan 30 12:44:29.064: INFO: Waiting for pod pod-subpath-test-downwardapi-zwwd to disappear
Jan 30 12:44:29.073: INFO: Pod pod-subpath-test-downwardapi-zwwd no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-zwwd
Jan 30 12:44:29.074: INFO: Deleting pod "pod-subpath-test-downwardapi-zwwd" in namespace "e2e-tests-subpath-sjv52"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:44:29.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-sjv52" for this suite.
Jan 30 12:44:35.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:44:35.323: INFO: namespace: e2e-tests-subpath-sjv52, resource: bindings, ignored listing per whitelist
Jan 30 12:44:35.371: INFO: namespace e2e-tests-subpath-sjv52 deletion completed in 6.222357315s

• [SLOW TEST:41.834 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
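The spec above mounts a downward API volume into a container via `subPath` and reads the projected file back. A minimal manifest sketch of that pattern, assuming illustrative pod/volume names and image (none are taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi   # illustrative name
spec:
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test/podname"]
    volumeMounts:
    - name: downward
      mountPath: /test
      subPath: downward                # mount only this subdirectory of the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: downward/podname         # file populated by the downward API
        fieldRef:
          fieldPath: metadata.name
  restartPolicy: Never
```

The conformance point being exercised is that atomic-writer volumes (downward API, ConfigMap, Secret) remain readable when mounted through `subPath`.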
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:44:35.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-77pqs
Jan 30 12:44:47.766: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-77pqs
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 12:44:47.770: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:48:49.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-77pqs" for this suite.
Jan 30 12:48:55.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:48:55.981: INFO: namespace: e2e-tests-container-probe-77pqs, resource: bindings, ignored listing per whitelist
Jan 30 12:48:56.042: INFO: namespace e2e-tests-container-probe-77pqs deletion completed in 6.267806903s

• [SLOW TEST:260.670 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
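The probe test above starts a container that creates `/tmp/health` and then confirms the exec liveness probe (`cat /tmp/health`) never fails, so `restartCount` stays at 0 over the observation window. A minimal sketch of such a probe, with an illustrative image and timing values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
```

Because the file is never removed, the probe keeps succeeding and the kubelet has no reason to restart the container.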
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:48:56.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-msb2w/configmap-test-e09d4a30-435e-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 12:48:56.345: INFO: Waiting up to 5m0s for pod "pod-configmaps-e09e1462-435e-11ea-a47a-0242ac110005" in namespace "e2e-tests-configmap-msb2w" to be "success or failure"
Jan 30 12:48:56.364: INFO: Pod "pod-configmaps-e09e1462-435e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.304315ms
Jan 30 12:48:58.380: INFO: Pod "pod-configmaps-e09e1462-435e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034332639s
Jan 30 12:49:00.412: INFO: Pod "pod-configmaps-e09e1462-435e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066380513s
Jan 30 12:49:02.432: INFO: Pod "pod-configmaps-e09e1462-435e-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086328819s
Jan 30 12:49:04.449: INFO: Pod "pod-configmaps-e09e1462-435e-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10420775s
STEP: Saw pod success
Jan 30 12:49:04.450: INFO: Pod "pod-configmaps-e09e1462-435e-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:49:04.458: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e09e1462-435e-11ea-a47a-0242ac110005 container env-test: 
STEP: delete the pod
Jan 30 12:49:04.575: INFO: Waiting for pod pod-configmaps-e09e1462-435e-11ea-a47a-0242ac110005 to disappear
Jan 30 12:49:04.667: INFO: Pod pod-configmaps-e09e1462-435e-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:49:04.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-msb2w" for this suite.
Jan 30 12:49:10.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:49:10.891: INFO: namespace: e2e-tests-configmap-msb2w, resource: bindings, ignored listing per whitelist
Jan 30 12:49:10.911: INFO: namespace e2e-tests-configmap-msb2w deletion completed in 6.208198569s

• [SLOW TEST:14.868 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
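The ConfigMap test above injects a ConfigMap key into a container through an environment variable and checks the value from the pod's output. A minimal sketch of that wiring, with illustrative names and data:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test        # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1     # env var sourced from the ConfigMap key
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
  restartPolicy: Never
```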
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:49:10.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 30 12:49:11.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:13.472: INFO: stderr: ""
Jan 30 12:49:13.472: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 12:49:13.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:13.656: INFO: stderr: ""
Jan 30 12:49:13.657: INFO: stdout: "update-demo-nautilus-852g4 update-demo-nautilus-btqtd "
Jan 30 12:49:13.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-852g4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:13.927: INFO: stderr: ""
Jan 30 12:49:13.928: INFO: stdout: ""
Jan 30 12:49:13.928: INFO: update-demo-nautilus-852g4 is created but not running
Jan 30 12:49:18.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:19.111: INFO: stderr: ""
Jan 30 12:49:19.111: INFO: stdout: "update-demo-nautilus-852g4 update-demo-nautilus-btqtd "
Jan 30 12:49:19.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-852g4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:19.213: INFO: stderr: ""
Jan 30 12:49:19.213: INFO: stdout: ""
Jan 30 12:49:19.213: INFO: update-demo-nautilus-852g4 is created but not running
Jan 30 12:49:24.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:24.356: INFO: stderr: ""
Jan 30 12:49:24.356: INFO: stdout: "update-demo-nautilus-852g4 update-demo-nautilus-btqtd "
Jan 30 12:49:24.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-852g4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:24.471: INFO: stderr: ""
Jan 30 12:49:24.471: INFO: stdout: "true"
Jan 30 12:49:24.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-852g4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:24.726: INFO: stderr: ""
Jan 30 12:49:24.726: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 12:49:24.726: INFO: validating pod update-demo-nautilus-852g4
Jan 30 12:49:24.774: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 12:49:24.774: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 12:49:24.774: INFO: update-demo-nautilus-852g4 is verified up and running
Jan 30 12:49:24.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-btqtd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:24.901: INFO: stderr: ""
Jan 30 12:49:24.901: INFO: stdout: "true"
Jan 30 12:49:24.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-btqtd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:25.032: INFO: stderr: ""
Jan 30 12:49:25.032: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 12:49:25.032: INFO: validating pod update-demo-nautilus-btqtd
Jan 30 12:49:25.044: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 12:49:25.044: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 12:49:25.044: INFO: update-demo-nautilus-btqtd is verified up and running
STEP: rolling-update to new replication controller
Jan 30 12:49:25.057: INFO: scanned /root for discovery docs: 
Jan 30 12:49:25.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:59.105: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 30 12:49:59.106: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 12:49:59.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:59.298: INFO: stderr: ""
Jan 30 12:49:59.298: INFO: stdout: "update-demo-kitten-hzgjj update-demo-kitten-ttmpn "
Jan 30 12:49:59.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hzgjj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:59.492: INFO: stderr: ""
Jan 30 12:49:59.492: INFO: stdout: "true"
Jan 30 12:49:59.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hzgjj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:59.609: INFO: stderr: ""
Jan 30 12:49:59.609: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 30 12:49:59.609: INFO: validating pod update-demo-kitten-hzgjj
Jan 30 12:49:59.632: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 30 12:49:59.632: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 30 12:49:59.632: INFO: update-demo-kitten-hzgjj is verified up and running
Jan 30 12:49:59.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ttmpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:59.772: INFO: stderr: ""
Jan 30 12:49:59.772: INFO: stdout: "true"
Jan 30 12:49:59.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ttmpn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzlck'
Jan 30 12:49:59.890: INFO: stderr: ""
Jan 30 12:49:59.890: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 30 12:49:59.890: INFO: validating pod update-demo-kitten-ttmpn
Jan 30 12:49:59.899: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 30 12:49:59.899: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 30 12:49:59.899: INFO: update-demo-kitten-ttmpn is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:49:59.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vzlck" for this suite.
Jan 30 12:50:24.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:50:24.206: INFO: namespace: e2e-tests-kubectl-vzlck, resource: bindings, ignored listing per whitelist
Jan 30 12:50:24.220: INFO: namespace e2e-tests-kubectl-vzlck deletion completed in 24.314696745s

• [SLOW TEST:73.308 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:50:24.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-152781b7-435f-11ea-a47a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 30 12:50:24.453: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-152847fc-435f-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-wt58h" to be "success or failure"
Jan 30 12:50:24.496: INFO: Pod "pod-projected-configmaps-152847fc-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.792254ms
Jan 30 12:50:26.543: INFO: Pod "pod-projected-configmaps-152847fc-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089887271s
Jan 30 12:50:28.599: INFO: Pod "pod-projected-configmaps-152847fc-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145734889s
Jan 30 12:50:30.679: INFO: Pod "pod-projected-configmaps-152847fc-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.226291294s
Jan 30 12:50:32.704: INFO: Pod "pod-projected-configmaps-152847fc-435f-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.250512727s
STEP: Saw pod success
Jan 30 12:50:32.704: INFO: Pod "pod-projected-configmaps-152847fc-435f-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:50:32.718: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-152847fc-435f-11ea-a47a-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 30 12:50:32.915: INFO: Waiting for pod pod-projected-configmaps-152847fc-435f-11ea-a47a-0242ac110005 to disappear
Jan 30 12:50:33.011: INFO: Pod pod-projected-configmaps-152847fc-435f-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:50:33.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wt58h" for this suite.
Jan 30 12:50:39.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:50:39.252: INFO: namespace: e2e-tests-projected-wt58h, resource: bindings, ignored listing per whitelist
Jan 30 12:50:39.354: INFO: namespace e2e-tests-projected-wt58h deletion completed in 6.302296407s

• [SLOW TEST:15.135 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
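The projected ConfigMap test above consumes a ConfigMap through a `projected` volume and remaps a key to a custom file path inside the mount ("with mappings"). A minimal sketch, assuming illustrative names and paths:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps   # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/remapped/key"]
    volumeMounts:
    - name: projected-config
      mountPath: /etc/projected
  volumes:
  - name: projected-config
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1            # key in the ConfigMap
            path: remapped/key     # file path under the mount point
  restartPolicy: Never
```

Without the `items` mapping, each key would appear as a file named after the key at the root of the mount.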
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:50:39.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 30 12:50:39.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-lx2ft'
Jan 30 12:50:39.782: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 30 12:50:39.782: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 30 12:50:39.805: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 30 12:50:39.847: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 30 12:50:39.939: INFO: scanned /root for discovery docs: 
Jan 30 12:50:39.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-lx2ft'
Jan 30 12:51:07.736: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 30 12:51:07.737: INFO: stdout: "Created e2e-test-nginx-rc-41cf6720c4b4f706530edf4d7a7af7b0\nScaling up e2e-test-nginx-rc-41cf6720c4b4f706530edf4d7a7af7b0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-41cf6720c4b4f706530edf4d7a7af7b0 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-41cf6720c4b4f706530edf4d7a7af7b0 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 30 12:51:07.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lx2ft'
Jan 30 12:51:08.025: INFO: stderr: ""
Jan 30 12:51:08.026: INFO: stdout: "e2e-test-nginx-rc-41cf6720c4b4f706530edf4d7a7af7b0-nm9zb e2e-test-nginx-rc-tn6mq "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 30 12:51:13.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lx2ft'
Jan 30 12:51:13.245: INFO: stderr: ""
Jan 30 12:51:13.245: INFO: stdout: "e2e-test-nginx-rc-41cf6720c4b4f706530edf4d7a7af7b0-nm9zb "
Jan 30 12:51:13.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-41cf6720c4b4f706530edf4d7a7af7b0-nm9zb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lx2ft'
Jan 30 12:51:13.337: INFO: stderr: ""
Jan 30 12:51:13.337: INFO: stdout: "true"
Jan 30 12:51:13.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-41cf6720c4b4f706530edf4d7a7af7b0-nm9zb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lx2ft'
Jan 30 12:51:13.460: INFO: stderr: ""
Jan 30 12:51:13.460: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 30 12:51:13.460: INFO: e2e-test-nginx-rc-41cf6720c4b4f706530edf4d7a7af7b0-nm9zb is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 30 12:51:13.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lx2ft'
Jan 30 12:51:13.642: INFO: stderr: ""
Jan 30 12:51:13.643: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:51:13.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lx2ft" for this suite.
Jan 30 12:51:21.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:51:21.898: INFO: namespace: e2e-tests-kubectl-lx2ft, resource: bindings, ignored listing per whitelist
Jan 30 12:51:21.966: INFO: namespace e2e-tests-kubectl-lx2ft deletion completed in 8.307255215s

• [SLOW TEST:42.611 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:51:21.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 30 12:51:22.322: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37aa504e-435f-11ea-a47a-0242ac110005" in namespace "e2e-tests-projected-tgx8q" to be "success or failure"
Jan 30 12:51:22.330: INFO: Pod "downwardapi-volume-37aa504e-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317828ms
Jan 30 12:51:24.371: INFO: Pod "downwardapi-volume-37aa504e-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049780061s
Jan 30 12:51:26.382: INFO: Pod "downwardapi-volume-37aa504e-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060345559s
Jan 30 12:51:28.409: INFO: Pod "downwardapi-volume-37aa504e-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087804816s
Jan 30 12:51:30.441: INFO: Pod "downwardapi-volume-37aa504e-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119007591s
Jan 30 12:51:32.463: INFO: Pod "downwardapi-volume-37aa504e-435f-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.141139219s
STEP: Saw pod success
Jan 30 12:51:32.463: INFO: Pod "downwardapi-volume-37aa504e-435f-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:51:32.473: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-37aa504e-435f-11ea-a47a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 30 12:51:32.640: INFO: Waiting for pod downwardapi-volume-37aa504e-435f-11ea-a47a-0242ac110005 to disappear
Jan 30 12:51:33.970: INFO: Pod downwardapi-volume-37aa504e-435f-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:51:33.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tgx8q" for this suite.
Jan 30 12:51:40.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:51:40.631: INFO: namespace: e2e-tests-projected-tgx8q, resource: bindings, ignored listing per whitelist
Jan 30 12:51:40.713: INFO: namespace e2e-tests-projected-tgx8q deletion completed in 6.705426138s

• [SLOW TEST:18.747 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
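[Editor's note] The test above checks that, when a container declares no memory limit, the projected downward API reports the node's allocatable memory as the default. A minimal sketch of such a pod follows; names and the mount path are illustrative, not the exact e2e fixture:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    # No resources.limits.memory is set, so the downward API
    # falls back to the node's allocatable memory.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

The pod prints the resolved value and exits, which is why the test waits for phase Succeeded ("success or failure").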
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:51:40.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 12:51:40.834: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:51:42.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-gqbdd" for this suite.
Jan 30 12:51:48.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:51:48.309: INFO: namespace: e2e-tests-custom-resource-definition-gqbdd, resource: bindings, ignored listing per whitelist
Jan 30 12:51:48.423: INFO: namespace e2e-tests-custom-resource-definition-gqbdd deletion completed in 6.377657071s

• [SLOW TEST:7.709 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
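[Editor's note] The CRD test only creates and deletes a definition object against the API server. On a v1.13 cluster that means the `apiextensions.k8s.io/v1beta1` API; a sketch with an illustrative group and kind (not the e2e fixture's names):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # v1beta1 matches the 1.13 cluster under test
kind: CustomResourceDefinition
metadata:
  # must be <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```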
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:51:48.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 30 12:52:06.926: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:06.935: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:08.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:08.986: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:10.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:10.955: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:12.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:12.956: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:14.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:14.957: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:16.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:16.954: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:18.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:18.956: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:20.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:20.946: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:22.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:23.038: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:24.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:24.962: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:26.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:26.966: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:28.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:29.223: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:30.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:30.992: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 12:52:32.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 12:52:33.025: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:52:33.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-stxkr" for this suite.
Jan 30 12:52:57.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:52:57.305: INFO: namespace: e2e-tests-container-lifecycle-hook-stxkr, resource: bindings, ignored listing per whitelist
Jan 30 12:52:57.328: INFO: namespace e2e-tests-container-lifecycle-hook-stxkr deletion completed in 24.235087884s

• [SLOW TEST:68.904 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
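[Editor's note] The long "still exists" countdown above is the pod's termination grace period: a preStop exec hook runs inside the container before SIGTERM is delivered, and deletion completes only after the hook and graceful shutdown finish. A hedged sketch of the shape of such a pod (the real e2e fixture notifies a separate handler pod, which this sketch replaces with a simple echo):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # Runs in the container on deletion, before SIGTERM;
          # the e2e test uses this window to record that the hook fired.
          command: ["sh", "-c", "echo prestop"]
```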
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:52:57.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan 30 12:52:57.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-75nrq run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 30 12:53:07.898: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0130 12:53:06.147629    3380 log.go:172] (0xc000138a50) (0xc0007083c0) Create stream\nI0130 12:53:06.147970    3380 log.go:172] (0xc000138a50) (0xc0007083c0) Stream added, broadcasting: 1\nI0130 12:53:06.155318    3380 log.go:172] (0xc000138a50) Reply frame received for 1\nI0130 12:53:06.155370    3380 log.go:172] (0xc000138a50) (0xc000685c20) Create stream\nI0130 12:53:06.155391    3380 log.go:172] (0xc000138a50) (0xc000685c20) Stream added, broadcasting: 3\nI0130 12:53:06.156732    3380 log.go:172] (0xc000138a50) Reply frame received for 3\nI0130 12:53:06.156783    3380 log.go:172] (0xc000138a50) (0xc0008f2000) Create stream\nI0130 12:53:06.156802    3380 log.go:172] (0xc000138a50) (0xc0008f2000) Stream added, broadcasting: 5\nI0130 12:53:06.157934    3380 log.go:172] (0xc000138a50) Reply frame received for 5\nI0130 12:53:06.157977    3380 log.go:172] (0xc000138a50) (0xc000708460) Create stream\nI0130 12:53:06.157989    3380 log.go:172] (0xc000138a50) (0xc000708460) Stream added, broadcasting: 7\nI0130 12:53:06.161396    3380 log.go:172] (0xc000138a50) Reply frame received for 7\nI0130 12:53:06.161921    3380 log.go:172] (0xc000685c20) (3) Writing data frame\nI0130 12:53:06.162155    3380 log.go:172] (0xc000685c20) (3) Writing data frame\nI0130 12:53:06.169935    3380 log.go:172] (0xc000138a50) Data frame received for 5\nI0130 12:53:06.169962    3380 log.go:172] (0xc0008f2000) (5) Data frame handling\nI0130 12:53:06.170009    3380 log.go:172] (0xc0008f2000) (5) Data frame sent\nI0130 12:53:06.180660    3380 log.go:172] (0xc000138a50) Data frame received for 5\nI0130 12:53:06.180691    3380 log.go:172] (0xc0008f2000) (5) Data frame handling\nI0130 12:53:06.180716    3380 log.go:172] (0xc0008f2000) (5) Data frame sent\nI0130 12:53:07.501623    3380 log.go:172] (0xc000138a50) Data frame received for 1\nI0130 12:53:07.501850    3380 log.go:172] (0xc000138a50) (0xc000708460) Stream removed, broadcasting: 7\nI0130 12:53:07.501943    3380 log.go:172] (0xc0007083c0) (1) Data frame handling\nI0130 12:53:07.501981    3380 log.go:172] (0xc0007083c0) (1) Data frame sent\nI0130 12:53:07.502064    3380 log.go:172] (0xc000138a50) (0xc0008f2000) Stream removed, broadcasting: 5\nI0130 12:53:07.502131    3380 log.go:172] (0xc000138a50) (0xc0007083c0) Stream removed, broadcasting: 1\nI0130 12:53:07.502211    3380 log.go:172] (0xc000138a50) (0xc000685c20) Stream removed, broadcasting: 3\nI0130 12:53:07.502267    3380 log.go:172] (0xc000138a50) Go away received\nI0130 12:53:07.502590    3380 log.go:172] (0xc000138a50) (0xc0007083c0) Stream removed, broadcasting: 1\nI0130 12:53:07.502608    3380 log.go:172] (0xc000138a50) (0xc000685c20) Stream removed, broadcasting: 3\nI0130 12:53:07.502618    3380 log.go:172] (0xc000138a50) (0xc0008f2000) Stream removed, broadcasting: 5\nI0130 12:53:07.502627    3380 log.go:172] (0xc000138a50) (0xc000708460) Stream removed, broadcasting: 7\n"
Jan 30 12:53:07.898: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:53:09.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-75nrq" for this suite.
Jan 30 12:53:15.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:53:16.145: INFO: namespace: e2e-tests-kubectl-75nrq, resource: bindings, ignored listing per whitelist
Jan 30 12:53:16.169: INFO: namespace e2e-tests-kubectl-75nrq deletion completed in 6.235153093s

• [SLOW TEST:18.840 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
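[Editor's note] The stderr above shows `--generator=job/v1` was already deprecated in 1.13. The imperative `kubectl run --rm --attach --stdin` invocation corresponds roughly to this declarative Job (a sketch, reconstructed from the command line in the log, not generated output):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        # stdin is kept open so the attached client can write
        # (the test sends "abcd1234", visible in the stdout line above)
        stdin: true
      restartPolicy: OnFailure
```

`--rm=true` additionally deletes the Job after the attached session ends, which is what the "was deleted" verification step checks.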
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:53:16.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 30 12:53:16.401: INFO: Waiting up to 5m0s for pod "var-expansion-7ba9b3e0-435f-11ea-a47a-0242ac110005" in namespace "e2e-tests-var-expansion-zkzfj" to be "success or failure"
Jan 30 12:53:16.491: INFO: Pod "var-expansion-7ba9b3e0-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 90.30156ms
Jan 30 12:53:18.523: INFO: Pod "var-expansion-7ba9b3e0-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121962796s
Jan 30 12:53:20.545: INFO: Pod "var-expansion-7ba9b3e0-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144155297s
Jan 30 12:53:22.586: INFO: Pod "var-expansion-7ba9b3e0-435f-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184480692s
Jan 30 12:53:24.625: INFO: Pod "var-expansion-7ba9b3e0-435f-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.223557434s
STEP: Saw pod success
Jan 30 12:53:24.625: INFO: Pod "var-expansion-7ba9b3e0-435f-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 12:53:24.639: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-7ba9b3e0-435f-11ea-a47a-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 30 12:53:24.796: INFO: Waiting for pod var-expansion-7ba9b3e0-435f-11ea-a47a-0242ac110005 to disappear
Jan 30 12:53:24.806: INFO: Pod var-expansion-7ba9b3e0-435f-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:53:24.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-zkzfj" for this suite.
Jan 30 12:53:31.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:53:31.187: INFO: namespace: e2e-tests-var-expansion-zkzfj, resource: bindings, ignored listing per whitelist
Jan 30 12:53:31.207: INFO: namespace e2e-tests-var-expansion-zkzfj deletion completed in 6.36633033s

• [SLOW TEST:15.038 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
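[Editor's note] Variable expansion composes env vars with the `$(VAR)` syntax, where references resolve against variables defined earlier in the same list. A minimal sketch mirroring the test's pattern (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo $(FOOBAR)"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      # $(FOO) and $(BAR) expand because both are defined above this entry
      value: $(FOO);;$(BAR)
```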
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:53:31.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 12:53:31.419: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.975158ms)
Jan 30 12:53:31.424: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.721827ms)
Jan 30 12:53:31.429: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.222531ms)
Jan 30 12:53:31.434: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.758128ms)
Jan 30 12:53:31.439: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.173889ms)
Jan 30 12:53:31.493: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 53.724625ms)
Jan 30 12:53:31.502: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.664542ms)
Jan 30 12:53:31.510: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.884466ms)
Jan 30 12:53:31.518: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.462396ms)
Jan 30 12:53:31.530: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.843946ms)
Jan 30 12:53:31.537: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.27199ms)
Jan 30 12:53:31.545: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.849722ms)
Jan 30 12:53:31.553: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.230328ms)
Jan 30 12:53:31.561: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.747015ms)
Jan 30 12:53:31.569: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.195474ms)
Jan 30 12:53:31.576: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.220356ms)
Jan 30 12:53:31.583: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.842989ms)
Jan 30 12:53:31.592: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.721919ms)
Jan 30 12:53:31.599: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.731606ms)
Jan 30 12:53:31.605: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.199654ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:53:31.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-kz8wv" for this suite.
Jan 30 12:53:37.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:53:37.877: INFO: namespace: e2e-tests-proxy-kz8wv, resource: bindings, ignored listing per whitelist
Jan 30 12:53:37.992: INFO: namespace e2e-tests-proxy-kz8wv deletion completed in 6.380676604s

• [SLOW TEST:6.783 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:53:37.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-8tshv
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-8tshv
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-8tshv
Jan 30 12:53:38.348: INFO: Found 0 stateful pods, waiting for 1
Jan 30 12:53:48.374: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 30 12:53:48.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8tshv ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 12:53:49.061: INFO: stderr: "I0130 12:53:48.742505    3406 log.go:172] (0xc0006fc370) (0xc0007b72c0) Create stream\nI0130 12:53:48.742911    3406 log.go:172] (0xc0006fc370) (0xc0007b72c0) Stream added, broadcasting: 1\nI0130 12:53:48.749865    3406 log.go:172] (0xc0006fc370) Reply frame received for 1\nI0130 12:53:48.749936    3406 log.go:172] (0xc0006fc370) (0xc000738000) Create stream\nI0130 12:53:48.749958    3406 log.go:172] (0xc0006fc370) (0xc000738000) Stream added, broadcasting: 3\nI0130 12:53:48.750923    3406 log.go:172] (0xc0006fc370) Reply frame received for 3\nI0130 12:53:48.750953    3406 log.go:172] (0xc0006fc370) (0xc000738140) Create stream\nI0130 12:53:48.750965    3406 log.go:172] (0xc0006fc370) (0xc000738140) Stream added, broadcasting: 5\nI0130 12:53:48.752675    3406 log.go:172] (0xc0006fc370) Reply frame received for 5\nI0130 12:53:48.919293    3406 log.go:172] (0xc0006fc370) Data frame received for 3\nI0130 12:53:48.919366    3406 log.go:172] (0xc000738000) (3) Data frame handling\nI0130 12:53:48.919387    3406 log.go:172] (0xc000738000) (3) Data frame sent\nI0130 12:53:49.043088    3406 log.go:172] (0xc0006fc370) Data frame received for 1\nI0130 12:53:49.043396    3406 log.go:172] (0xc0006fc370) (0xc000738140) Stream removed, broadcasting: 5\nI0130 12:53:49.043626    3406 log.go:172] (0xc0006fc370) (0xc000738000) Stream removed, broadcasting: 3\nI0130 12:53:49.043672    3406 log.go:172] (0xc0007b72c0) (1) Data frame handling\nI0130 12:53:49.043729    3406 log.go:172] (0xc0007b72c0) (1) Data frame sent\nI0130 12:53:49.043755    3406 log.go:172] (0xc0006fc370) (0xc0007b72c0) Stream removed, broadcasting: 1\nI0130 12:53:49.043789    3406 log.go:172] (0xc0006fc370) Go away received\nI0130 12:53:49.044808    3406 log.go:172] (0xc0006fc370) (0xc0007b72c0) Stream removed, broadcasting: 1\nI0130 12:53:49.045131    3406 log.go:172] (0xc0006fc370) (0xc000738000) Stream removed, broadcasting: 3\nI0130 12:53:49.045186    3406 log.go:172] (0xc0006fc370) (0xc000738140) Stream removed, broadcasting: 5\n"
Jan 30 12:53:49.062: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 12:53:49.062: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 30 12:53:49.080: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 30 12:53:59.101: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 12:53:59.101: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 12:53:59.212: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998262s
Jan 30 12:54:00.237: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.984216139s
Jan 30 12:54:01.268: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.958985285s
Jan 30 12:54:02.291: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.92870444s
Jan 30 12:54:03.312: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.905672249s
Jan 30 12:54:04.394: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.882529808s
Jan 30 12:54:05.421: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.802865134s
Jan 30 12:54:06.443: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.775620626s
Jan 30 12:54:07.464: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.752886397s
Jan 30 12:54:08.513: INFO: Verifying statefulset ss doesn't scale past 1 for another 732.140294ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-8tshv
Jan 30 12:54:09.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8tshv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 12:54:10.254: INFO: stderr: "I0130 12:54:09.854505    3428 log.go:172] (0xc000138630) (0xc000722640) Create stream\nI0130 12:54:09.855320    3428 log.go:172] (0xc000138630) (0xc000722640) Stream added, broadcasting: 1\nI0130 12:54:09.871117    3428 log.go:172] (0xc000138630) Reply frame received for 1\nI0130 12:54:09.871307    3428 log.go:172] (0xc000138630) (0xc0007226e0) Create stream\nI0130 12:54:09.871354    3428 log.go:172] (0xc000138630) (0xc0007226e0) Stream added, broadcasting: 3\nI0130 12:54:09.874407    3428 log.go:172] (0xc000138630) Reply frame received for 3\nI0130 12:54:09.874488    3428 log.go:172] (0xc000138630) (0xc0005d6e60) Create stream\nI0130 12:54:09.874514    3428 log.go:172] (0xc000138630) (0xc0005d6e60) Stream added, broadcasting: 5\nI0130 12:54:09.889819    3428 log.go:172] (0xc000138630) Reply frame received for 5\nI0130 12:54:10.071277    3428 log.go:172] (0xc000138630) Data frame received for 3\nI0130 12:54:10.071435    3428 log.go:172] (0xc0007226e0) (3) Data frame handling\nI0130 12:54:10.071500    3428 log.go:172] (0xc0007226e0) (3) Data frame sent\nI0130 12:54:10.241006    3428 log.go:172] (0xc000138630) Data frame received for 1\nI0130 12:54:10.241181    3428 log.go:172] (0xc000138630) (0xc0007226e0) Stream removed, broadcasting: 3\nI0130 12:54:10.241261    3428 log.go:172] (0xc000722640) (1) Data frame handling\nI0130 12:54:10.241308    3428 log.go:172] (0xc000722640) (1) Data frame sent\nI0130 12:54:10.241325    3428 log.go:172] (0xc000138630) (0xc000722640) Stream removed, broadcasting: 1\nI0130 12:54:10.241529    3428 log.go:172] (0xc000138630) (0xc0005d6e60) Stream removed, broadcasting: 5\nI0130 12:54:10.241724    3428 log.go:172] (0xc000138630) Go away received\nI0130 12:54:10.242479    3428 log.go:172] (0xc000138630) (0xc000722640) Stream removed, broadcasting: 1\nI0130 12:54:10.242513    3428 log.go:172] (0xc000138630) (0xc0007226e0) Stream removed, broadcasting: 3\nI0130 12:54:10.242527    3428 log.go:172] (0xc000138630) (0xc0005d6e60) Stream removed, broadcasting: 5\n"
Jan 30 12:54:10.255: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 12:54:10.255: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 12:54:10.269: INFO: Found 1 stateful pods, waiting for 3
Jan 30 12:54:20.305: INFO: Found 2 stateful pods, waiting for 3
Jan 30 12:54:30.362: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 12:54:30.362: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 12:54:30.362: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 30 12:54:40.415: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 12:54:40.415: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 12:54:40.415: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 30 12:54:40.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8tshv ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 12:54:41.369: INFO: stderr: "I0130 12:54:40.875028    3449 log.go:172] (0xc00015c840) (0xc000764640) Create stream\nI0130 12:54:40.875986    3449 log.go:172] (0xc00015c840) (0xc000764640) Stream added, broadcasting: 1\nI0130 12:54:40.893441    3449 log.go:172] (0xc00015c840) Reply frame received for 1\nI0130 12:54:40.894080    3449 log.go:172] (0xc00015c840) (0xc0001cabe0) Create stream\nI0130 12:54:40.894201    3449 log.go:172] (0xc00015c840) (0xc0001cabe0) Stream added, broadcasting: 3\nI0130 12:54:40.904048    3449 log.go:172] (0xc00015c840) Reply frame received for 3\nI0130 12:54:40.904373    3449 log.go:172] (0xc00015c840) (0xc0003b4000) Create stream\nI0130 12:54:40.904413    3449 log.go:172] (0xc00015c840) (0xc0003b4000) Stream added, broadcasting: 5\nI0130 12:54:40.908200    3449 log.go:172] (0xc00015c840) Reply frame received for 5\nI0130 12:54:41.157689    3449 log.go:172] (0xc00015c840) Data frame received for 3\nI0130 12:54:41.157824    3449 log.go:172] (0xc0001cabe0) (3) Data frame handling\nI0130 12:54:41.157864    3449 log.go:172] (0xc0001cabe0) (3) Data frame sent\nI0130 12:54:41.356153    3449 log.go:172] (0xc00015c840) (0xc0001cabe0) Stream removed, broadcasting: 3\nI0130 12:54:41.356454    3449 log.go:172] (0xc00015c840) Data frame received for 1\nI0130 12:54:41.356556    3449 log.go:172] (0xc00015c840) (0xc0003b4000) Stream removed, broadcasting: 5\nI0130 12:54:41.356717    3449 log.go:172] (0xc000764640) (1) Data frame handling\nI0130 12:54:41.356754    3449 log.go:172] (0xc000764640) (1) Data frame sent\nI0130 12:54:41.356772    3449 log.go:172] (0xc00015c840) (0xc000764640) Stream removed, broadcasting: 1\nI0130 12:54:41.356861    3449 log.go:172] (0xc00015c840) Go away received\nI0130 12:54:41.358424    3449 log.go:172] (0xc00015c840) (0xc000764640) Stream removed, broadcasting: 1\nI0130 12:54:41.358475    3449 log.go:172] (0xc00015c840) (0xc0001cabe0) Stream removed, broadcasting: 3\nI0130 12:54:41.358487    3449 log.go:172] 
(0xc00015c840) (0xc0003b4000) Stream removed, broadcasting: 5\n"
Jan 30 12:54:41.369: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 12:54:41.370: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 30 12:54:41.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8tshv ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 12:54:42.515: INFO: stderr: "I0130 12:54:41.743767    3470 log.go:172] (0xc000138790) (0xc00088a5a0) Create stream\nI0130 12:54:41.744173    3470 log.go:172] (0xc000138790) (0xc00088a5a0) Stream added, broadcasting: 1\nI0130 12:54:41.749816    3470 log.go:172] (0xc000138790) Reply frame received for 1\nI0130 12:54:41.749942    3470 log.go:172] (0xc000138790) (0xc00088a640) Create stream\nI0130 12:54:41.749952    3470 log.go:172] (0xc000138790) (0xc00088a640) Stream added, broadcasting: 3\nI0130 12:54:41.751360    3470 log.go:172] (0xc000138790) Reply frame received for 3\nI0130 12:54:41.751427    3470 log.go:172] (0xc000138790) (0xc00068ec80) Create stream\nI0130 12:54:41.751438    3470 log.go:172] (0xc000138790) (0xc00068ec80) Stream added, broadcasting: 5\nI0130 12:54:41.752288    3470 log.go:172] (0xc000138790) Reply frame received for 5\nI0130 12:54:42.002926    3470 log.go:172] (0xc000138790) Data frame received for 3\nI0130 12:54:42.003707    3470 log.go:172] (0xc00088a640) (3) Data frame handling\nI0130 12:54:42.003826    3470 log.go:172] (0xc00088a640) (3) Data frame sent\nI0130 12:54:42.487379    3470 log.go:172] (0xc000138790) (0xc00088a640) Stream removed, broadcasting: 3\nI0130 12:54:42.488432    3470 log.go:172] (0xc000138790) Data frame received for 1\nI0130 12:54:42.488518    3470 log.go:172] (0xc00088a5a0) (1) Data frame handling\nI0130 12:54:42.488620    3470 log.go:172] (0xc00088a5a0) (1) Data frame sent\nI0130 12:54:42.488890    3470 log.go:172] (0xc000138790) (0xc00088a5a0) Stream removed, broadcasting: 1\nI0130 12:54:42.488997    3470 log.go:172] (0xc000138790) (0xc00068ec80) Stream removed, broadcasting: 5\nI0130 12:54:42.489197    3470 log.go:172] (0xc000138790) Go away received\nI0130 12:54:42.490096    3470 log.go:172] (0xc000138790) (0xc00088a5a0) Stream removed, broadcasting: 1\nI0130 12:54:42.490129    3470 log.go:172] (0xc000138790) (0xc00088a640) Stream removed, broadcasting: 3\nI0130 12:54:42.490142    3470 log.go:172] 
(0xc000138790) (0xc00068ec80) Stream removed, broadcasting: 5\n"
Jan 30 12:54:42.516: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 12:54:42.516: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 30 12:54:42.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8tshv ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 12:54:43.592: INFO: stderr: "I0130 12:54:43.119556    3491 log.go:172] (0xc0007e84d0) (0xc0006f94a0) Create stream\nI0130 12:54:43.120536    3491 log.go:172] (0xc0007e84d0) (0xc0006f94a0) Stream added, broadcasting: 1\nI0130 12:54:43.140520    3491 log.go:172] (0xc0007e84d0) Reply frame received for 1\nI0130 12:54:43.141005    3491 log.go:172] (0xc0007e84d0) (0xc000888000) Create stream\nI0130 12:54:43.141136    3491 log.go:172] (0xc0007e84d0) (0xc000888000) Stream added, broadcasting: 3\nI0130 12:54:43.154400    3491 log.go:172] (0xc0007e84d0) Reply frame received for 3\nI0130 12:54:43.154861    3491 log.go:172] (0xc0007e84d0) (0xc0006f9540) Create stream\nI0130 12:54:43.154891    3491 log.go:172] (0xc0007e84d0) (0xc0006f9540) Stream added, broadcasting: 5\nI0130 12:54:43.158105    3491 log.go:172] (0xc0007e84d0) Reply frame received for 5\nI0130 12:54:43.449343    3491 log.go:172] (0xc0007e84d0) Data frame received for 3\nI0130 12:54:43.449490    3491 log.go:172] (0xc000888000) (3) Data frame handling\nI0130 12:54:43.449553    3491 log.go:172] (0xc000888000) (3) Data frame sent\nI0130 12:54:43.583198    3491 log.go:172] (0xc0007e84d0) Data frame received for 1\nI0130 12:54:43.583343    3491 log.go:172] (0xc0007e84d0) (0xc0006f9540) Stream removed, broadcasting: 5\nI0130 12:54:43.583401    3491 log.go:172] (0xc0006f94a0) (1) Data frame handling\nI0130 12:54:43.583417    3491 log.go:172] (0xc0006f94a0) (1) Data frame sent\nI0130 12:54:43.583424    3491 log.go:172] (0xc0007e84d0) (0xc0006f94a0) Stream removed, broadcasting: 1\nI0130 12:54:43.583923    3491 log.go:172] (0xc0007e84d0) (0xc000888000) Stream removed, broadcasting: 3\nI0130 12:54:43.584166    3491 log.go:172] (0xc0007e84d0) Go away received\nI0130 12:54:43.584270    3491 log.go:172] (0xc0007e84d0) (0xc0006f94a0) Stream removed, broadcasting: 1\nI0130 12:54:43.584329    3491 log.go:172] (0xc0007e84d0) (0xc000888000) Stream removed, broadcasting: 3\nI0130 12:54:43.584363    3491 log.go:172] 
(0xc0007e84d0) (0xc0006f9540) Stream removed, broadcasting: 5\n"
Jan 30 12:54:43.593: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 12:54:43.593: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 30 12:54:43.593: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 12:54:43.623: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 30 12:54:53.912: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 12:54:53.912: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 12:54:53.912: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 12:54:54.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999996271s
Jan 30 12:54:55.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.879478058s
Jan 30 12:54:56.229: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.824205121s
Jan 30 12:54:57.261: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.758388622s
Jan 30 12:54:58.446: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.726932399s
Jan 30 12:55:00.098: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.541172154s
Jan 30 12:55:01.120: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.889423515s
Jan 30 12:55:02.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.867161156s
Jan 30 12:55:03.977: INFO: Verifying statefulset ss doesn't scale past 3 for another 54.930204ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace e2e-tests-statefulset-8tshv
Jan 30 12:55:04.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8tshv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 12:55:05.739: INFO: stderr: "I0130 12:55:05.423181    3510 log.go:172] (0xc000810420) (0xc00067b360) Create stream\nI0130 12:55:05.423648    3510 log.go:172] (0xc000810420) (0xc00067b360) Stream added, broadcasting: 1\nI0130 12:55:05.433823    3510 log.go:172] (0xc000810420) Reply frame received for 1\nI0130 12:55:05.433943    3510 log.go:172] (0xc000810420) (0xc00067b400) Create stream\nI0130 12:55:05.433954    3510 log.go:172] (0xc000810420) (0xc00067b400) Stream added, broadcasting: 3\nI0130 12:55:05.435526    3510 log.go:172] (0xc000810420) Reply frame received for 3\nI0130 12:55:05.435595    3510 log.go:172] (0xc000810420) (0xc0000f2000) Create stream\nI0130 12:55:05.435618    3510 log.go:172] (0xc000810420) (0xc0000f2000) Stream added, broadcasting: 5\nI0130 12:55:05.437015    3510 log.go:172] (0xc000810420) Reply frame received for 5\nI0130 12:55:05.581841    3510 log.go:172] (0xc000810420) Data frame received for 3\nI0130 12:55:05.581998    3510 log.go:172] (0xc00067b400) (3) Data frame handling\nI0130 12:55:05.582027    3510 log.go:172] (0xc00067b400) (3) Data frame sent\nI0130 12:55:05.725561    3510 log.go:172] (0xc000810420) Data frame received for 1\nI0130 12:55:05.725698    3510 log.go:172] (0xc000810420) (0xc00067b400) Stream removed, broadcasting: 3\nI0130 12:55:05.725737    3510 log.go:172] (0xc00067b360) (1) Data frame handling\nI0130 12:55:05.725784    3510 log.go:172] (0xc00067b360) (1) Data frame sent\nI0130 12:55:05.725899    3510 log.go:172] (0xc000810420) (0xc0000f2000) Stream removed, broadcasting: 5\nI0130 12:55:05.725949    3510 log.go:172] (0xc000810420) (0xc00067b360) Stream removed, broadcasting: 1\nI0130 12:55:05.725991    3510 log.go:172] (0xc000810420) Go away received\nI0130 12:55:05.726862    3510 log.go:172] (0xc000810420) (0xc00067b360) Stream removed, broadcasting: 1\nI0130 12:55:05.726898    3510 log.go:172] (0xc000810420) (0xc00067b400) Stream removed, broadcasting: 3\nI0130 12:55:05.726913    3510 log.go:172] 
(0xc000810420) (0xc0000f2000) Stream removed, broadcasting: 5\n"
Jan 30 12:55:05.740: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 12:55:05.740: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 12:55:05.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8tshv ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 12:55:06.732: INFO: stderr: "I0130 12:55:06.081092    3533 log.go:172] (0xc000736370) (0xc0007d0640) Create stream\nI0130 12:55:06.081858    3533 log.go:172] (0xc000736370) (0xc0007d0640) Stream added, broadcasting: 1\nI0130 12:55:06.133833    3533 log.go:172] (0xc000736370) Reply frame received for 1\nI0130 12:55:06.134756    3533 log.go:172] (0xc000736370) (0xc0006aac80) Create stream\nI0130 12:55:06.134977    3533 log.go:172] (0xc000736370) (0xc0006aac80) Stream added, broadcasting: 3\nI0130 12:55:06.140977    3533 log.go:172] (0xc000736370) Reply frame received for 3\nI0130 12:55:06.141460    3533 log.go:172] (0xc000736370) (0xc0006aadc0) Create stream\nI0130 12:55:06.141515    3533 log.go:172] (0xc000736370) (0xc0006aadc0) Stream added, broadcasting: 5\nI0130 12:55:06.145282    3533 log.go:172] (0xc000736370) Reply frame received for 5\nI0130 12:55:06.532566    3533 log.go:172] (0xc000736370) Data frame received for 3\nI0130 12:55:06.532778    3533 log.go:172] (0xc0006aac80) (3) Data frame handling\nI0130 12:55:06.532830    3533 log.go:172] (0xc0006aac80) (3) Data frame sent\nI0130 12:55:06.721135    3533 log.go:172] (0xc000736370) Data frame received for 1\nI0130 12:55:06.721227    3533 log.go:172] (0xc000736370) (0xc0006aadc0) Stream removed, broadcasting: 5\nI0130 12:55:06.721284    3533 log.go:172] (0xc0007d0640) (1) Data frame handling\nI0130 12:55:06.721294    3533 log.go:172] (0xc0007d0640) (1) Data frame sent\nI0130 12:55:06.721320    3533 log.go:172] (0xc000736370) (0xc0006aac80) Stream removed, broadcasting: 3\nI0130 12:55:06.721349    3533 log.go:172] (0xc000736370) (0xc0007d0640) Stream removed, broadcasting: 1\nI0130 12:55:06.721469    3533 log.go:172] (0xc000736370) Go away received\nI0130 12:55:06.721853    3533 log.go:172] (0xc000736370) (0xc0007d0640) Stream removed, broadcasting: 1\nI0130 12:55:06.721867    3533 log.go:172] (0xc000736370) (0xc0006aac80) Stream removed, broadcasting: 3\nI0130 12:55:06.721876    3533 log.go:172] 
(0xc000736370) (0xc0006aadc0) Stream removed, broadcasting: 5\n"
Jan 30 12:55:06.732: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 12:55:06.732: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 12:55:06.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8tshv ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 12:55:07.163: INFO: stderr: "I0130 12:55:06.899428    3554 log.go:172] (0xc0006a42c0) (0xc0006c8780) Create stream\nI0130 12:55:06.899681    3554 log.go:172] (0xc0006a42c0) (0xc0006c8780) Stream added, broadcasting: 1\nI0130 12:55:06.904037    3554 log.go:172] (0xc0006a42c0) Reply frame received for 1\nI0130 12:55:06.904088    3554 log.go:172] (0xc0006a42c0) (0xc0006c8820) Create stream\nI0130 12:55:06.904094    3554 log.go:172] (0xc0006a42c0) (0xc0006c8820) Stream added, broadcasting: 3\nI0130 12:55:06.904951    3554 log.go:172] (0xc0006a42c0) Reply frame received for 3\nI0130 12:55:06.904976    3554 log.go:172] (0xc0006a42c0) (0xc00078c780) Create stream\nI0130 12:55:06.904984    3554 log.go:172] (0xc0006a42c0) (0xc00078c780) Stream added, broadcasting: 5\nI0130 12:55:06.907686    3554 log.go:172] (0xc0006a42c0) Reply frame received for 5\nI0130 12:55:07.040428    3554 log.go:172] (0xc0006a42c0) Data frame received for 3\nI0130 12:55:07.040536    3554 log.go:172] (0xc0006c8820) (3) Data frame handling\nI0130 12:55:07.040593    3554 log.go:172] (0xc0006c8820) (3) Data frame sent\nI0130 12:55:07.153170    3554 log.go:172] (0xc0006a42c0) (0xc0006c8820) Stream removed, broadcasting: 3\nI0130 12:55:07.153648    3554 log.go:172] (0xc0006a42c0) Data frame received for 1\nI0130 12:55:07.153660    3554 log.go:172] (0xc0006c8780) (1) Data frame handling\nI0130 12:55:07.153678    3554 log.go:172] (0xc0006c8780) (1) Data frame sent\nI0130 12:55:07.153690    3554 log.go:172] (0xc0006a42c0) (0xc0006c8780) Stream removed, broadcasting: 1\nI0130 12:55:07.153994    3554 log.go:172] (0xc0006a42c0) (0xc00078c780) Stream removed, broadcasting: 5\nI0130 12:55:07.154122    3554 log.go:172] (0xc0006a42c0) Go away received\nI0130 12:55:07.154333    3554 log.go:172] (0xc0006a42c0) (0xc0006c8780) Stream removed, broadcasting: 1\nI0130 12:55:07.154414    3554 log.go:172] (0xc0006a42c0) (0xc0006c8820) Stream removed, broadcasting: 3\nI0130 12:55:07.154467    3554 log.go:172] 
(0xc0006a42c0) (0xc00078c780) Stream removed, broadcasting: 5\n"
Jan 30 12:55:07.163: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 12:55:07.163: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 12:55:07.163: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
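"Scaled up in order" and "scaled down in reverse order" are the documented guarantees of the default OrderedReady pod management policy: pods are created ordinal 0 through replicas-1, each waiting for its predecessor to be Running and Ready, and removed highest ordinal first. A small illustration of the expected orderings (the two helper functions are ours, for illustration only):

```go
package main

import "fmt"

// scaleUpOrder lists pod names in the order an OrderedReady
// StatefulSet controller creates them: ordinal 0 first.
func scaleUpOrder(name string, replicas int) []string {
	order := make([]string, 0, replicas)
	for i := 0; i < replicas; i++ {
		order = append(order, fmt.Sprintf("%s-%d", name, i))
	}
	return order
}

// scaleDownOrder lists pod names in deletion order: the highest
// ordinal is terminated first, matching the log's reverse-order check.
func scaleDownOrder(name string, replicas int) []string {
	order := make([]string, 0, replicas)
	for i := replicas - 1; i >= 0; i-- {
		order = append(order, fmt.Sprintf("%s-%d", name, i))
	}
	return order
}

func main() {
	fmt.Println(scaleUpOrder("ss", 3))   // [ss-0 ss-1 ss-2]
	fmt.Println(scaleDownOrder("ss", 3)) // [ss-2 ss-1 ss-0]
}
```

This ordering is also why the earlier scale-up stalled while ss-2 was Pending: ss-2 is the last ordinal and the set is not fully up until it is Ready.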
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 30 12:55:37.386: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8tshv
Jan 30 12:55:37.399: INFO: Scaling statefulset ss to 0
Jan 30 12:55:37.416: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 12:55:37.419: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:55:37.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-8tshv" for this suite.
Jan 30 12:55:45.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:55:45.668: INFO: namespace: e2e-tests-statefulset-8tshv, resource: bindings, ignored listing per whitelist
Jan 30 12:55:45.789: INFO: namespace e2e-tests-statefulset-8tshv deletion completed in 8.313942933s

• [SLOW TEST:127.797 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 12:55:45.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-drntf
Jan 30 12:55:56.201: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-drntf
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 12:55:56.223: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 12:59:57.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-drntf" for this suite.
Jan 30 13:00:05.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:00:05.635: INFO: namespace: e2e-tests-container-probe-drntf, resource: bindings, ignored listing per whitelist
Jan 30 13:00:05.718: INFO: namespace e2e-tests-container-probe-drntf deletion completed in 8.206335525s

• [SLOW TEST:259.928 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 13:00:05.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 30 13:00:06.628: INFO: Waiting up to 5m0s for pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j" in namespace "e2e-tests-svcaccounts-w8w4z" to be "success or failure"
Jan 30 13:00:06.653: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j": Phase="Pending", Reason="", readiness=false. Elapsed: 25.119549ms
Jan 30 13:00:08.809: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181060161s
Jan 30 13:00:10.842: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213480297s
Jan 30 13:00:12.903: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.27510959s
Jan 30 13:00:15.174: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.54605451s
Jan 30 13:00:17.198: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.569528806s
Jan 30 13:00:19.215: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j": Phase="Pending", Reason="", readiness=false. Elapsed: 12.587169166s
Jan 30 13:00:21.499: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j": Phase="Pending", Reason="", readiness=false. Elapsed: 14.871302753s
Jan 30 13:00:23.515: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j": Phase="Pending", Reason="", readiness=false. Elapsed: 16.887062785s
Jan 30 13:00:25.575: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.947322813s
STEP: Saw pod success
Jan 30 13:00:25.576: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j" satisfied condition "success or failure"
Jan 30 13:00:25.585: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j container token-test: 
STEP: delete the pod
Jan 30 13:00:26.405: INFO: Waiting for pod pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j to disappear
Jan 30 13:00:26.419: INFO: Pod pod-service-account-70252280-4360-11ea-a47a-0242ac110005-6tv7j no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 30 13:00:26.434: INFO: Waiting up to 5m0s for pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2" in namespace "e2e-tests-svcaccounts-w8w4z" to be "success or failure"
Jan 30 13:00:26.482: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2": Phase="Pending", Reason="", readiness=false. Elapsed: 48.353893ms
Jan 30 13:00:28.499: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064980837s
Jan 30 13:00:30.531: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096941969s
Jan 30 13:00:33.295: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.861433503s
Jan 30 13:00:36.655: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221424417s
Jan 30 13:00:38.678: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.244323318s
Jan 30 13:00:41.050: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.616177826s
Jan 30 13:00:43.081: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.647549565s
Jan 30 13:00:45.099: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.664858887s
STEP: Saw pod success
Jan 30 13:00:45.099: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2" satisfied condition "success or failure"
Jan 30 13:00:45.103: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2 container root-ca-test: 
STEP: delete the pod
Jan 30 13:00:45.258: INFO: Waiting for pod pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2 to disappear
Jan 30 13:00:45.280: INFO: Pod pod-service-account-70252280-4360-11ea-a47a-0242ac110005-8j9n2 no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 30 13:00:45.311: INFO: Waiting up to 5m0s for pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb" in namespace "e2e-tests-svcaccounts-w8w4z" to be "success or failure"
Jan 30 13:00:45.442: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb": Phase="Pending", Reason="", readiness=false. Elapsed: 130.55993ms
Jan 30 13:00:47.466: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155463201s
Jan 30 13:00:49.496: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184542403s
Jan 30 13:00:51.961: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.650473635s
Jan 30 13:00:54.033: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.721678365s
Jan 30 13:00:56.221: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.909602733s
Jan 30 13:00:58.249: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.938116595s
Jan 30 13:01:00.449: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.138238042s
Jan 30 13:01:02.615: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.304001045s
Jan 30 13:01:04.710: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.398586764s
Jan 30 13:01:07.176: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.865419509s
STEP: Saw pod success
Jan 30 13:01:07.177: INFO: Pod "pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb" satisfied condition "success or failure"
Jan 30 13:01:07.195: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb container namespace-test: 
STEP: delete the pod
Jan 30 13:01:07.885: INFO: Waiting for pod pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb to disappear
Jan 30 13:01:07.916: INFO: Pod pod-service-account-70252280-4360-11ea-a47a-0242ac110005-xmssb no longer exists
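The three pods above (token-test, root-ca-test, namespace-test) each read one file from the service-account volume that is auto-mounted into every container at a well-known path. For illustration, the expected paths can be built like this (the `saFiles` helper is ours; the mount path and file names are the standard defaults):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// saMount is the fixed mount point of the service-account volume
// inside a container.
const saMount = "/var/run/secrets/kubernetes.io/serviceaccount"

// saFiles returns the full paths of the three files the test's
// containers consume: the API token, the cluster root CA, and the
// pod's namespace.
func saFiles() []string {
	files := []string{"token", "ca.crt", "namespace"}
	out := make([]string, 0, len(files))
	for _, f := range files {
		out = append(out, filepath.Join(saMount, f))
	}
	return out
}

func main() {
	for _, p := range saFiles() {
		fmt.Println(p)
	}
}
```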
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 13:01:07.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-w8w4z" for this suite.
Jan 30 13:01:16.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:01:16.407: INFO: namespace: e2e-tests-svcaccounts-w8w4z, resource: bindings, ignored listing per whitelist
Jan 30 13:01:16.426: INFO: namespace e2e-tests-svcaccounts-w8w4z deletion completed in 8.398736878s

• [SLOW TEST:70.708 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 13:01:16.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan 30 13:01:29.345: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
Jan 30 13:03:01.391: INFO: Unexpected error occurred: timed out waiting for the condition
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-namespaces-zhgn6".
STEP: Found 0 events.
Jan 30 13:03:01.426: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Jan 30 13:03:01.427: INFO: test-pod-uninitialized                              hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:01:29 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:01:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:01:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:01:29 +0000 UTC  }]
Jan 30 13:03:01.427: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan 30 13:03:01.427: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan 30 13:03:01.427: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 30 13:03:01.427: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 30 13:03:01.427: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 30 13:03:01.427: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Jan 30 13:03:01.427: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 30 13:03:01.427: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:11:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:11:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Jan 30 13:03:01.427: INFO: 
Jan 30 13:03:01.436: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Jan 30 13:03:01.444: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:19975828,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-30 13:02:57 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-30 13:02:57 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-30 13:02:57 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 
2020-01-30 13:02:57 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:70821e443be75ea38bdf52a974fd2271babd5875b2b1964f05025981c75a6717 nginx:latest] 126698067} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:8aa7f6a9585d908a63e5e418dc5d14ae7467d2e36e1ab4f0d8f9d059a3d071ce] 126324348} 
{[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} 
{[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} 
{[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan 30 13:03:01.446: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Jan 30 13:03:01.456: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps
Jan 30 13:03:01.481: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Jan 30 13:03:01.481: INFO: 	Container weave ready: true, restart count 0
Jan 30 13:03:01.481: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 13:03:01.481: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan 30 13:03:01.481: INFO: 	Container coredns ready: true, restart count 0
Jan 30 13:03:01.481: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 30 13:03:01.481: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 30 13:03:01.481: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 30 13:03:01.481: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan 30 13:03:01.481: INFO: 	Container coredns ready: true, restart count 0
Jan 30 13:03:01.481: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Jan 30 13:03:01.481: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 13:03:01.481: INFO: test-pod-uninitialized started at 2020-01-30 13:01:29 +0000 UTC (0+1 container statuses recorded)
Jan 30 13:03:01.481: INFO: 	Container nginx ready: true, restart count 0
Jan 30 13:03:01.481: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
W0130 13:03:01.492454       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 13:03:01.575: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Jan 30 13:03:01.575: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:12.053286s}
Jan 30 13:03:01.575: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.020039s}
Jan 30 13:03:01.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-zhgn6" for this suite.
Jan 30 13:03:07.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:03:07.782: INFO: namespace: e2e-tests-namespaces-zhgn6, resource: bindings, ignored listing per whitelist
Jan 30 13:03:07.806: INFO: namespace e2e-tests-namespaces-zhgn6 deletion completed in 6.218887144s
STEP: Destroying namespace "e2e-tests-nsdeletetest-99jxn" for this suite.
Jan 30 13:03:07.814: INFO: Couldn't delete ns: "e2e-tests-nsdeletetest-99jxn": Operation cannot be fulfilled on namespaces "e2e-tests-nsdeletetest-99jxn": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"e2e-tests-nsdeletetest-99jxn\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc001eae600), Code:409}})

• Failure [111.390 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc0000a18b0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161
------------------------------
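The failed spec above exercises namespace cascade deletion: a pod is created inside a freshly created test namespace, the namespace is deleted, and the framework waits for the pod to disappear before the wait timed out here. A minimal sketch of the setup being tested (names are illustrative; the e2e framework generates random names such as e2e-tests-nsdeletetest-99jxn):

```yaml
# Hypothetical reproduction of the spec's setup; names are placeholders,
# not the generated e2e namespace/pod names from the log above.
apiVersion: v1
kind: Namespace
metadata:
  name: nsdelete-demo
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdelete-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
```

Deleting the namespace (`kubectl delete namespace nsdelete-demo`) should cascade to the pod; the spec failed because that removal did not complete within the wait window, producing the "timed out waiting for the condition" error.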
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 13:03:07.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 30 13:03:08.111: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 30 13:03:08.221: INFO: Number of nodes with available pods: 0
Jan 30 13:03:08.221: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 30 13:03:08.384: INFO: Number of nodes with available pods: 0
Jan 30 13:03:08.384: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:09.405: INFO: Number of nodes with available pods: 0
Jan 30 13:03:09.406: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:10.398: INFO: Number of nodes with available pods: 0
Jan 30 13:03:10.399: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:11.398: INFO: Number of nodes with available pods: 0
Jan 30 13:03:11.398: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:12.440: INFO: Number of nodes with available pods: 0
Jan 30 13:03:12.440: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:13.418: INFO: Number of nodes with available pods: 0
Jan 30 13:03:13.418: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:14.644: INFO: Number of nodes with available pods: 0
Jan 30 13:03:14.644: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:15.398: INFO: Number of nodes with available pods: 0
Jan 30 13:03:15.398: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:16.495: INFO: Number of nodes with available pods: 0
Jan 30 13:03:16.496: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:17.399: INFO: Number of nodes with available pods: 0
Jan 30 13:03:17.399: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:18.398: INFO: Number of nodes with available pods: 1
Jan 30 13:03:18.398: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 30 13:03:18.556: INFO: Number of nodes with available pods: 1
Jan 30 13:03:18.556: INFO: Number of running nodes: 0, number of available pods: 1
Jan 30 13:03:19.568: INFO: Number of nodes with available pods: 0
Jan 30 13:03:19.569: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 30 13:03:19.617: INFO: Number of nodes with available pods: 0
Jan 30 13:03:19.617: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:20.660: INFO: Number of nodes with available pods: 0
Jan 30 13:03:20.660: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:21.807: INFO: Number of nodes with available pods: 0
Jan 30 13:03:21.807: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:22.646: INFO: Number of nodes with available pods: 0
Jan 30 13:03:22.646: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:23.645: INFO: Number of nodes with available pods: 0
Jan 30 13:03:23.645: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:24.697: INFO: Number of nodes with available pods: 0
Jan 30 13:03:24.697: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:25.661: INFO: Number of nodes with available pods: 0
Jan 30 13:03:25.661: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:26.668: INFO: Number of nodes with available pods: 0
Jan 30 13:03:26.668: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:27.648: INFO: Number of nodes with available pods: 0
Jan 30 13:03:27.649: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:28.658: INFO: Number of nodes with available pods: 0
Jan 30 13:03:28.659: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:29.776: INFO: Number of nodes with available pods: 0
Jan 30 13:03:29.776: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:30.643: INFO: Number of nodes with available pods: 0
Jan 30 13:03:30.643: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:31.643: INFO: Number of nodes with available pods: 0
Jan 30 13:03:31.644: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:32.643: INFO: Number of nodes with available pods: 0
Jan 30 13:03:32.643: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:33.663: INFO: Number of nodes with available pods: 0
Jan 30 13:03:33.663: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:35.255: INFO: Number of nodes with available pods: 0
Jan 30 13:03:35.255: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:35.752: INFO: Number of nodes with available pods: 0
Jan 30 13:03:35.752: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:36.713: INFO: Number of nodes with available pods: 0
Jan 30 13:03:36.713: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:37.628: INFO: Number of nodes with available pods: 0
Jan 30 13:03:37.628: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 30 13:03:38.635: INFO: Number of nodes with available pods: 1
Jan 30 13:03:38.636: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6s798, will wait for the garbage collector to delete the pods
Jan 30 13:03:38.717: INFO: Deleting DaemonSet.extensions daemon-set took: 17.565814ms
Jan 30 13:03:38.817: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.620169ms
Jan 30 13:03:47.668: INFO: Number of nodes with available pods: 0
Jan 30 13:03:47.668: INFO: Number of running nodes: 0, number of available pods: 0
Jan 30 13:03:47.674: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6s798/daemonsets","resourceVersion":"19975943"},"items":null}

Jan 30 13:03:47.680: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6s798/pods","resourceVersion":"19975943"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 13:03:47.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6s798" for this suite.
Jan 30 13:03:54.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:03:54.245: INFO: namespace: e2e-tests-daemonsets-6s798, resource: bindings, ignored listing per whitelist
Jan 30 13:03:54.274: INFO: namespace e2e-tests-daemonsets-6s798 deletion completed in 6.347077971s

• [SLOW TEST:46.457 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
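The "complex daemon" spec above drives scheduling purely through node labels: the DaemonSet carries a nodeSelector, so its pod is launched only after the node is labeled blue, unscheduled when the node is relabeled green, and launched again once the selector is updated to green alongside a RollingUpdate strategy. A hedged sketch of that shape (the label key/values and image are illustrative stand-ins; the real test uses generated labels):

```yaml
# Illustrative DaemonSet matching the spec's flow; "color: green" stands in
# for the test's generated node label.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green   # pods schedule only on nodes carrying this label
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
```

With this in place, `kubectl label node <node> color=green` causes a daemon pod to appear on that node, and removing or changing the label causes it to be unscheduled, which is the cycle the log's "Number of running nodes" polling tracks.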
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 13:03:54.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 30 13:03:54.681: INFO: Waiting up to 5m0s for pod "downward-api-f8041058-4360-11ea-a47a-0242ac110005" in namespace "e2e-tests-downward-api-dhr6q" to be "success or failure"
Jan 30 13:03:54.905: INFO: Pod "downward-api-f8041058-4360-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 223.21977ms
Jan 30 13:03:56.927: INFO: Pod "downward-api-f8041058-4360-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245449258s
Jan 30 13:03:58.940: INFO: Pod "downward-api-f8041058-4360-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258566904s
Jan 30 13:04:00.958: INFO: Pod "downward-api-f8041058-4360-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276732495s
Jan 30 13:04:03.076: INFO: Pod "downward-api-f8041058-4360-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394581916s
Jan 30 13:04:05.099: INFO: Pod "downward-api-f8041058-4360-11ea-a47a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.417483418s
Jan 30 13:04:07.124: INFO: Pod "downward-api-f8041058-4360-11ea-a47a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.442350654s
STEP: Saw pod success
Jan 30 13:04:07.124: INFO: Pod "downward-api-f8041058-4360-11ea-a47a-0242ac110005" satisfied condition "success or failure"
Jan 30 13:04:07.138: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f8041058-4360-11ea-a47a-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 30 13:04:07.886: INFO: Waiting for pod downward-api-f8041058-4360-11ea-a47a-0242ac110005 to disappear
Jan 30 13:04:07.938: INFO: Pod downward-api-f8041058-4360-11ea-a47a-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 13:04:07.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dhr6q" for this suite.
Jan 30 13:04:14.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:04:14.194: INFO: namespace: e2e-tests-downward-api-dhr6q, resource: bindings, ignored listing per whitelist
Jan 30 13:04:14.244: INFO: namespace e2e-tests-downward-api-dhr6q deletion completed in 6.270590563s

• [SLOW TEST:19.969 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
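The Downward API spec above verifies that when a container declares no resource limits, environment variables populated via `resourceFieldRef` fall back to the node's allocatable CPU and memory. A sketch of the pod shape under test (pod and variable names are illustrative; `resourceFieldRef` with `limits.cpu` / `limits.memory` is the actual API field):

```yaml
# Illustrative pod: no resources are set on the container, so the
# Downward API resolves limits.cpu/limits.memory to node allocatable.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu      # no limit declared, resolves to node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory   # likewise resolves to node allocatable memory
```

The test then reads the container's log (the `env` output) to confirm the defaulted values, which is why the log shows it fetching logs from the `dapi-container` container before deleting the pod.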
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 13:04:14.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 13:04:24.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-fzjjc" for this suite.
Jan 30 13:05:08.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:05:08.904: INFO: namespace: e2e-tests-kubelet-test-fzjjc, resource: bindings, ignored listing per whitelist
Jan 30 13:05:09.109: INFO: namespace e2e-tests-kubelet-test-fzjjc deletion completed in 44.463696983s

• [SLOW TEST:54.864 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
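The Kubelet spec above runs a busybox command in a pod and asserts that its stdout is captured in the container log. A minimal sketch of that pattern (names and the echoed string are illustrative):

```yaml
# Illustrative pod: stdout from the command should be retrievable
# via the container's logs.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo 'hello from the busybox pod'"]
```

After the pod completes, `kubectl logs busybox-logs-demo` should print the echoed line, which is the behavior the conformance test checks.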
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 13:05:09.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0130 13:05:23.830659       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 13:05:23.830: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 13:05:23.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-zlb2c" for this suite.
Jan 30 13:06:02.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:06:03.076: INFO: namespace: e2e-tests-gc-zlb2c, resource: bindings, ignored listing per whitelist
Jan 30 13:06:03.184: INFO: namespace e2e-tests-gc-zlb2c deletion completed in 39.317914444s

• [SLOW TEST:54.074 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 13:06:03.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-461c8233-4361-11ea-a47a-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-461c8233-4361-11ea-a47a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 13:07:45.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cnqp7" for this suite.
Jan 30 13:08:09.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:08:09.453: INFO: namespace: e2e-tests-configmap-cnqp7, resource: bindings, ignored listing per whitelist
Jan 30 13:08:09.467: INFO: namespace e2e-tests-configmap-cnqp7 deletion completed in 24.280541012s

• [SLOW TEST:126.282 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 13:08:09.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 30 13:08:09.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:12.088: INFO: stderr: ""
Jan 30 13:08:12.089: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 13:08:12.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:12.324: INFO: stderr: ""
Jan 30 13:08:12.325: INFO: stdout: "update-demo-nautilus-n9clz "
STEP: Replicas for name=update-demo: expected=2 actual=1
Jan 30 13:08:17.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:17.503: INFO: stderr: ""
Jan 30 13:08:17.503: INFO: stdout: "update-demo-nautilus-n9clz update-demo-nautilus-zkvqt "
Jan 30 13:08:17.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9clz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:17.630: INFO: stderr: ""
Jan 30 13:08:17.630: INFO: stdout: ""
Jan 30 13:08:17.630: INFO: update-demo-nautilus-n9clz is created but not running
Jan 30 13:08:22.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:22.824: INFO: stderr: ""
Jan 30 13:08:22.824: INFO: stdout: "update-demo-nautilus-n9clz update-demo-nautilus-zkvqt "
Jan 30 13:08:22.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9clz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:23.077: INFO: stderr: ""
Jan 30 13:08:23.078: INFO: stdout: ""
Jan 30 13:08:23.078: INFO: update-demo-nautilus-n9clz is created but not running
Jan 30 13:08:28.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:28.269: INFO: stderr: ""
Jan 30 13:08:28.269: INFO: stdout: "update-demo-nautilus-n9clz update-demo-nautilus-zkvqt "
Jan 30 13:08:28.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9clz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:28.430: INFO: stderr: ""
Jan 30 13:08:28.431: INFO: stdout: "true"
Jan 30 13:08:28.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9clz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:28.675: INFO: stderr: ""
Jan 30 13:08:28.676: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 13:08:28.676: INFO: validating pod update-demo-nautilus-n9clz
Jan 30 13:08:28.694: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 13:08:28.694: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 13:08:28.695: INFO: update-demo-nautilus-n9clz is verified up and running
Jan 30 13:08:28.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zkvqt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:28.813: INFO: stderr: ""
Jan 30 13:08:28.814: INFO: stdout: "true"
Jan 30 13:08:28.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zkvqt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:28.987: INFO: stderr: ""
Jan 30 13:08:28.988: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 13:08:28.988: INFO: validating pod update-demo-nautilus-zkvqt
Jan 30 13:08:28.999: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 13:08:28.999: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 13:08:28.999: INFO: update-demo-nautilus-zkvqt is verified up and running
STEP: using delete to clean up resources
Jan 30 13:08:29.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:29.151: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 13:08:29.151: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 30 13:08:29.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-592rr'
Jan 30 13:08:29.348: INFO: stderr: "No resources found.\n"
Jan 30 13:08:29.348: INFO: stdout: ""
Jan 30 13:08:29.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-592rr -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 30 13:08:29.575: INFO: stderr: ""
Jan 30 13:08:29.575: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 13:08:29.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-592rr" for this suite.
Jan 30 13:08:55.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:08:55.720: INFO: namespace: e2e-tests-kubectl-592rr, resource: bindings, ignored listing per whitelist
Jan 30 13:08:55.877: INFO: namespace e2e-tests-kubectl-592rr deletion completed in 26.273821778s

• [SLOW TEST:46.409 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 13:08:55.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-htbhr
Jan 30 13:09:08.343: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-htbhr
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 13:09:08.349: INFO: Initial restart count of pod liveness-exec is 0
Jan 30 13:10:02.063: INFO: Restart count of pod e2e-tests-container-probe-htbhr/liveness-exec is now 1 (53.713886963s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 13:10:02.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-htbhr" for this suite.
Jan 30 13:10:12.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:10:12.975: INFO: namespace: e2e-tests-container-probe-htbhr, resource: bindings, ignored listing per whitelist
Jan 30 13:10:13.002: INFO: namespace e2e-tests-container-probe-htbhr deletion completed in 10.777141825s

• [SLOW TEST:77.125 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 13:10:13.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 30 13:10:13.192: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 30 13:10:13.252: INFO: Waiting for terminating namespaces to be deleted...
Jan 30 13:10:13.257: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 30 13:10:13.275: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 30 13:10:13.275: INFO: 	Container coredns ready: true, restart count 0
Jan 30 13:10:13.275: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 30 13:10:13.275: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 13:10:13.275: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 30 13:10:13.275: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 30 13:10:13.275: INFO: 	Container weave ready: true, restart count 0
Jan 30 13:10:13.275: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 13:10:13.275: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 30 13:10:13.275: INFO: 	Container coredns ready: true, restart count 0
Jan 30 13:10:13.275: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 30 13:10:13.275: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 30 13:10:13.275: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 30 13:10:13.399: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 30 13:10:13.399: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 30 13:10:13.399: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 30 13:10:13.399: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 30 13:10:13.399: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 30 13:10:13.399: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 30 13:10:13.399: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 30 13:10:13.399: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-d9d91917-4361-11ea-a47a-0242ac110005.15eeac297d70aa48], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-8frbk/filler-pod-d9d91917-4361-11ea-a47a-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-d9d91917-4361-11ea-a47a-0242ac110005.15eeac2afcd0e030], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-d9d91917-4361-11ea-a47a-0242ac110005.15eeac2b86ddf447], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-d9d91917-4361-11ea-a47a-0242ac110005.15eeac2bba2bf76d], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15eeac2c4ab2b7f1], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 13:10:26.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-8frbk" for this suite.
Jan 30 13:10:37.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:10:37.789: INFO: namespace: e2e-tests-sched-pred-8frbk, resource: bindings, ignored listing per whitelist
Jan 30 13:10:37.894: INFO: namespace e2e-tests-sched-pred-8frbk deletion completed in 11.089383725s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:24.891 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 13:10:37.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 13:10:38.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-lstb7" for this suite.
Jan 30 13:10:46.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:10:46.517: INFO: namespace: e2e-tests-kubelet-test-lstb7, resource: bindings, ignored listing per whitelist
Jan 30 13:10:46.700: INFO: namespace e2e-tests-kubelet-test-lstb7 deletion completed in 8.350021988s

• [SLOW TEST:8.804 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 30 13:10:46.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 30 13:10:57.659: INFO: Successfully updated pod "pod-update-activedeadlineseconds-edd651ec-4361-11ea-a47a-0242ac110005"
Jan 30 13:10:57.659: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-edd651ec-4361-11ea-a47a-0242ac110005" in namespace "e2e-tests-pods-jvhmp" to be "terminated due to deadline exceeded"
Jan 30 13:10:57.669: INFO: Pod "pod-update-activedeadlineseconds-edd651ec-4361-11ea-a47a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.965657ms
Jan 30 13:10:59.689: INFO: Pod "pod-update-activedeadlineseconds-edd651ec-4361-11ea-a47a-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.030185219s
Jan 30 13:10:59.690: INFO: Pod "pod-update-activedeadlineseconds-edd651ec-4361-11ea-a47a-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 30 13:10:59.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jvhmp" for this suite.
Jan 30 13:11:07.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:11:07.944: INFO: namespace: e2e-tests-pods-jvhmp, resource: bindings, ignored listing per whitelist
Jan 30 13:11:08.052: INFO: namespace e2e-tests-pods-jvhmp deletion completed in 8.295873493s

• [SLOW TEST:21.352 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
Jan 30 13:11:08.052: INFO: Running AfterSuite actions on all nodes
Jan 30 13:11:08.052: INFO: Running AfterSuite actions on node 1
Jan 30 13:11:08.052: INFO: Skipping dumping logs from cluster


Summarizing 2 Failures:

[Fail] [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook [It] should execute poststart exec hook properly [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175

[Fail] [sig-api-machinery] Namespaces [Serial] [It] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161

Ran 199 of 2164 Specs in 8631.768 seconds
FAIL! -- 197 Passed | 2 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (8632.33s)
FAIL