I0908 23:40:57.577816 6 e2e.go:243] Starting e2e run "d5244ffa-5e0a-4101-876b-8e6da8386968" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1599608456 - Will randomize all specs
Will run 215 of 4413 specs

Sep 8 23:40:57.766: INFO: >>> kubeConfig: /root/.kube/config
Sep 8 23:40:57.771: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 8 23:40:57.794: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 8 23:40:57.832: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 8 23:40:57.832: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 8 23:40:57.832: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 8 23:40:57.846: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 8 23:40:57.846: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 8 23:40:57.846: INFO: e2e test version: v1.15.12
Sep 8 23:40:57.847: INFO: kube-apiserver version: v1.15.13-beta.0.1+a34f1e483104bd
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:40:57.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Sep 8 23:40:57.914: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-1c629730-2856-4bb4-961f-2765cff1ac11
STEP: Creating a pod to test consume configMaps
Sep 8 23:40:57.922: INFO: Waiting up to 5m0s for pod "pod-configmaps-382a614f-b718-436c-8c19-711a15bfc5bd" in namespace "configmap-2581" to be "success or failure"
Sep 8 23:40:57.938: INFO: Pod "pod-configmaps-382a614f-b718-436c-8c19-711a15bfc5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.807256ms
Sep 8 23:40:59.942: INFO: Pod "pod-configmaps-382a614f-b718-436c-8c19-711a15bfc5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020010771s
Sep 8 23:41:01.946: INFO: Pod "pod-configmaps-382a614f-b718-436c-8c19-711a15bfc5bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023882948s
STEP: Saw pod success
Sep 8 23:41:01.946: INFO: Pod "pod-configmaps-382a614f-b718-436c-8c19-711a15bfc5bd" satisfied condition "success or failure"
Sep 8 23:41:01.949: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-382a614f-b718-436c-8c19-711a15bfc5bd container configmap-volume-test:
STEP: delete the pod
Sep 8 23:41:01.970: INFO: Waiting for pod pod-configmaps-382a614f-b718-436c-8c19-711a15bfc5bd to disappear
Sep 8 23:41:01.975: INFO: Pod pod-configmaps-382a614f-b718-436c-8c19-711a15bfc5bd no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:41:01.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2581" for this suite.
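For reference, the manifest pair this ConfigMap-volume test builds looks roughly like the sketch below. The names, data values, image, and paths are illustrative assumptions, not values taken from the run; only the key/path remapping ("mappings") and the per-item `mode` correspond to what the test name exercises.

```yaml
# Hypothetical sketch of the test's objects: a ConfigMap key remapped to a
# different path inside the volume, with an explicit per-item file mode.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # illustrative; the run uses a generated name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps              # illustrative; the run uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29   # assumed test image
    args: ["sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2 && cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-2   # the "mapping": key exposed under a different path
        mode: 0400             # the "Item mode": per-file permission bits
```

The pod reads the mapped file back and exits, which is why the log waits for the pod to reach "success or failure" and then fetches the container's logs.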
Sep 8 23:41:08.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:41:08.087: INFO: namespace configmap-2581 deletion completed in 6.085357455s

• [SLOW TEST:10.239 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:41:08.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 8 23:41:08.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5649'
Sep 8 23:41:10.431: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 8 23:41:10.431: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Sep 8 23:41:10.439: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Sep 8 23:41:10.448: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Sep 8 23:41:10.462: INFO: scanned /root for discovery docs:
Sep 8 23:41:10.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5649'
Sep 8 23:41:26.299: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Sep 8 23:41:26.299: INFO: stdout: "Created e2e-test-nginx-rc-befbbb8c7ce98963b5f27d45e7dfa92c\nScaling up e2e-test-nginx-rc-befbbb8c7ce98963b5f27d45e7dfa92c from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-befbbb8c7ce98963b5f27d45e7dfa92c up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-befbbb8c7ce98963b5f27d45e7dfa92c to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Sep 8 23:41:26.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5649'
Sep 8 23:41:26.397: INFO: stderr: ""
Sep 8 23:41:26.397: INFO: stdout: "e2e-test-nginx-rc-befbbb8c7ce98963b5f27d45e7dfa92c-mk24n "
Sep 8 23:41:26.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-befbbb8c7ce98963b5f27d45e7dfa92c-mk24n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5649'
Sep 8 23:41:26.491: INFO: stderr: ""
Sep 8 23:41:26.491: INFO: stdout: "true"
Sep 8 23:41:26.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-befbbb8c7ce98963b5f27d45e7dfa92c-mk24n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5649'
Sep 8 23:41:26.592: INFO: stderr: ""
Sep 8 23:41:26.592: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Sep 8 23:41:26.592: INFO: e2e-test-nginx-rc-befbbb8c7ce98963b5f27d45e7dfa92c-mk24n is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Sep 8 23:41:26.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5649'
Sep 8 23:41:26.686: INFO: stderr: ""
Sep 8 23:41:26.686: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:41:26.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5649" for this suite.
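The kubectl invocations exercised above can be reproduced by hand against a disposable v1.15-era cluster roughly as follows. This is a sketch, not a copy of the run: the `--kubeconfig`/`--namespace` flags are dropped, and note that `--generator=run/v1` and `kubectl rolling-update` were both deprecated at the time of this log and have since been removed from kubectl.

```
# Create an RC-backed workload (v1.15-era syntax; --generator=run/v1 no longer exists)
kubectl run e2e-test-nginx-rc \
  --image=docker.io/library/nginx:1.14-alpine \
  --generator=run/v1

# Roll the RC to the same image, forcing replacement of its pods
kubectl rolling-update e2e-test-nginx-rc \
  --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine \
  --image-pull-policy=IfNotPresent

# List the replacement pods by the RC's selector label
kubectl get pods -l run=e2e-test-nginx-rc \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
```

As the log's stdout shows, `rolling-update` creates a temporary controller with a hashed name, scales it up while scaling the old one down, then deletes the old controller and renames the new one back; the modern equivalent is a Deployment with `kubectl rollout`.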
Sep 8 23:41:32.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:41:32.833: INFO: namespace kubectl-5649 deletion completed in 6.130661717s

• [SLOW TEST:24.746 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl rolling-update
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support rolling-update to same image [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:41:32.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Sep 8 23:41:32.914: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:41:43.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1872" for this suite.
Sep 8 23:42:05.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:42:05.109: INFO: namespace init-container-1872 deletion completed in 22.084736205s

• [SLOW TEST:32.276 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:42:05.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Sep 8 23:42:13.214: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 8 23:42:13.239: INFO: Pod pod-with-prestop-http-hook still exists
Sep 8 23:42:15.239: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 8 23:42:15.244: INFO: Pod pod-with-prestop-http-hook still exists
Sep 8 23:42:17.239: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 8 23:42:17.242: INFO: Pod pod-with-prestop-http-hook still exists
Sep 8 23:42:19.239: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 8 23:42:19.243: INFO: Pod pod-with-prestop-http-hook still exists
Sep 8 23:42:21.239: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 8 23:42:21.244: INFO: Pod pod-with-prestop-http-hook still exists
Sep 8 23:42:23.239: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 8 23:42:23.243: INFO: Pod pod-with-prestop-http-hook still exists
Sep 8 23:42:25.239: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 8 23:42:25.243: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:42:25.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6689" for this suite.
Sep 8 23:42:47.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:42:47.363: INFO: namespace container-lifecycle-hook-6689 deletion completed in 22.108063902s

• [SLOW TEST:42.253 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:42:47.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-bba41b48-ca5c-4c02-b349-a99d6235fa10
STEP: Creating a pod to test consume configMaps
Sep 8 23:42:48.200: INFO: Waiting up to 5m0s for pod "pod-configmaps-d70714f9-7643-44ff-a052-51122a71ea78" in namespace "configmap-1147" to be "success or failure"
Sep 8 23:42:48.217: INFO: Pod "pod-configmaps-d70714f9-7643-44ff-a052-51122a71ea78": Phase="Pending", Reason="", readiness=false. Elapsed: 16.786548ms
Sep 8 23:42:50.223: INFO: Pod "pod-configmaps-d70714f9-7643-44ff-a052-51122a71ea78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023247837s
Sep 8 23:42:52.227: INFO: Pod "pod-configmaps-d70714f9-7643-44ff-a052-51122a71ea78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027155161s
STEP: Saw pod success
Sep 8 23:42:52.227: INFO: Pod "pod-configmaps-d70714f9-7643-44ff-a052-51122a71ea78" satisfied condition "success or failure"
Sep 8 23:42:52.230: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d70714f9-7643-44ff-a052-51122a71ea78 container configmap-volume-test:
STEP: delete the pod
Sep 8 23:42:52.265: INFO: Waiting for pod pod-configmaps-d70714f9-7643-44ff-a052-51122a71ea78 to disappear
Sep 8 23:42:52.385: INFO: Pod pod-configmaps-d70714f9-7643-44ff-a052-51122a71ea78 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:42:52.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1147" for this suite.
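The non-root variant of the ConfigMap-volume test differs from the earlier one mainly in the pod's security context. A hedged sketch of that difference, with illustrative names, uid, and image (the run itself uses generated names):

```yaml
# Hypothetical sketch: same ConfigMap-volume consumption, but the container
# process runs as a non-root uid via the pod-level securityContext.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-nonroot     # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # any non-zero uid exercises the non-root read path
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29   # assumed test image
    args: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume  # illustrative; the log shows a generated name
```

The success criterion is the same "success or failure" pattern seen above: the pod must be able to read the projected file as the non-root user and exit zero.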
Sep 8 23:42:58.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:42:58.511: INFO: namespace configmap-1147 deletion completed in 6.121552003s

• [SLOW TEST:11.148 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:42:58.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-dcbvw in namespace proxy-8396
I0908 23:42:58.661261 6 runners.go:180] Created replication controller with name: proxy-service-dcbvw, namespace: proxy-8396, replica count: 1
I0908 23:42:59.711733 6 runners.go:180] proxy-service-dcbvw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0908 23:43:00.711944 6 runners.go:180] proxy-service-dcbvw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0908 23:43:01.712302 6 runners.go:180] proxy-service-dcbvw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0908 23:43:02.712518 6 runners.go:180] proxy-service-dcbvw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0908 23:43:03.712735 6 runners.go:180] proxy-service-dcbvw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0908 23:43:04.712978 6 runners.go:180] proxy-service-dcbvw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0908 23:43:05.713334 6 runners.go:180] proxy-service-dcbvw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0908 23:43:06.713587 6 runners.go:180] proxy-service-dcbvw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0908 23:43:07.713854 6 runners.go:180] proxy-service-dcbvw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0908 23:43:08.714051 6 runners.go:180] proxy-service-dcbvw Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Sep 8 23:43:08.718: INFO: setup took 10.114745695s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Sep 8 23:43:08.724: INFO: (0) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:1080/proxy/: ... (200; 6.523061ms)
Sep 8 23:43:08.724: INFO: (0) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 6.449336ms)
Sep 8 23:43:08.726: INFO: (0) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 8.664846ms)
Sep 8 23:43:08.726: INFO: (0) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 8.583183ms)
Sep 8 23:43:08.729: INFO: (0) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 11.196643ms)
Sep 8 23:43:08.729: INFO: (0) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 11.460356ms)
Sep 8 23:43:08.730: INFO: (0) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 11.914942ms)
Sep 8 23:43:08.730: INFO: (0) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 12.032752ms)
Sep 8 23:43:08.732: INFO: (0) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 14.568191ms)
Sep 8 23:43:08.733: INFO: (0) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 14.716314ms)
Sep 8 23:43:08.733: INFO: (0) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 15.232409ms)
Sep 8 23:43:08.733: INFO: (0) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 15.354755ms)
Sep 8 23:43:08.734: INFO: (0) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 16.107795ms)
Sep 8 23:43:08.734: INFO: (0) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 16.376542ms)
Sep 8 23:43:08.735: INFO: (0) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: test (200; 7.246917ms)
Sep 8 23:43:08.745: INFO: (1) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 7.490243ms)
Sep 8 23:43:08.745: INFO: (1) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 7.561063ms)
Sep 8 23:43:08.747: INFO: (1) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 9.687024ms)
Sep 8 23:43:08.747: INFO: (1) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 9.887086ms)
Sep 8 23:43:08.747: INFO: (1) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 10.180543ms)
Sep 8 23:43:08.747: INFO: (1) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 10.001179ms)
Sep 8 23:43:08.747: INFO: (1) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: ... (200; 10.106653ms)
Sep 8 23:43:08.747: INFO: (1) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 10.018904ms)
Sep 8 23:43:08.748: INFO: (1) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 10.677412ms)
Sep 8 23:43:08.748: INFO: (1) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname2/proxy/: tls qux (200; 10.957333ms)
Sep 8 23:43:08.748: INFO: (1) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 10.949536ms)
Sep 8 23:43:08.748: INFO: (1) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 10.863663ms)
Sep 8 23:43:08.759: INFO: (1) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 21.633188ms)
Sep 8 23:43:08.763: INFO: (2) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:1080/proxy/: ... (200; 3.550678ms)
Sep 8 23:43:08.763: INFO: (2) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 3.630624ms)
Sep 8 23:43:08.763: INFO: (2) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 3.578018ms)
Sep 8 23:43:08.763: INFO: (2) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 3.637886ms)
Sep 8 23:43:08.763: INFO: (2) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 3.72643ms)
Sep 8 23:43:08.763: INFO: (2) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: test (200; 3.684317ms)
Sep 8 23:43:08.763: INFO: (2) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 3.667175ms)
Sep 8 23:43:08.763: INFO: (2) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 3.703874ms)
Sep 8 23:43:08.763: INFO: (2) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 3.678999ms)
Sep 8 23:43:08.764: INFO: (2) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 4.943171ms)
Sep 8 23:43:08.764: INFO: (2) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 4.928612ms)
Sep 8 23:43:08.764: INFO: (2) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 4.869673ms)
Sep 8 23:43:08.764: INFO: (2) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 5.184704ms)
Sep 8 23:43:08.764: INFO: (2) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 5.375898ms)
Sep 8 23:43:08.764: INFO: (2) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname2/proxy/: tls qux (200; 5.503594ms)
Sep 8 23:43:08.767: INFO: (3) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 2.621702ms)
Sep 8 23:43:08.767: INFO: (3) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 2.866185ms)
Sep 8 23:43:08.768: INFO: (3) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:1080/proxy/: ... (200; 3.113839ms)
Sep 8 23:43:08.769: INFO: (3) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 4.499549ms)
Sep 8 23:43:08.769: INFO: (3) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 4.89218ms)
Sep 8 23:43:08.770: INFO: (3) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 5.295858ms)
Sep 8 23:43:08.770: INFO: (3) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: test (200; 5.381043ms)
Sep 8 23:43:08.770: INFO: (3) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 5.414567ms)
Sep 8 23:43:08.770: INFO: (3) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 5.382154ms)
Sep 8 23:43:08.770: INFO: (3) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 5.350443ms)
Sep 8 23:43:08.770: INFO: (3) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 5.47879ms)
Sep 8 23:43:08.770: INFO: (3) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 5.726744ms)
Sep 8 23:43:08.770: INFO: (3) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 5.839373ms)
Sep 8 23:43:08.771: INFO: (3) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 5.894504ms)
Sep 8 23:43:08.772: INFO: (3) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname2/proxy/: tls qux (200; 7.429542ms)
Sep 8 23:43:08.775: INFO: (4) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 2.543115ms)
Sep 8 23:43:08.775: INFO: (4) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 2.578852ms)
Sep 8 23:43:08.776: INFO: (4) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 3.522934ms)
Sep 8 23:43:08.777: INFO: (4) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 4.397395ms)
Sep 8 23:43:08.777: INFO: (4) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 4.44688ms)
Sep 8 23:43:08.777: INFO: (4) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 4.450219ms)
Sep 8 23:43:08.777: INFO: (4) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname2/proxy/: tls qux (200; 4.508693ms)
Sep 8 23:43:08.777: INFO: (4) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 4.47457ms)
Sep 8 23:43:08.777: INFO: (4) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 4.481625ms)
Sep 8 23:43:08.777: INFO: (4) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: ... (200; 4.578064ms)
Sep 8 23:43:08.777: INFO: (4) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 4.811507ms)
Sep 8 23:43:08.777: INFO: (4) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 4.813108ms)
Sep 8 23:43:08.777: INFO: (4) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 4.793959ms)
Sep 8 23:43:08.777: INFO: (4) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 4.838505ms)
Sep 8 23:43:08.781: INFO: (5) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: test (200; 4.071918ms)
Sep 8 23:43:08.782: INFO: (5) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 4.818609ms)
Sep 8 23:43:08.782: INFO: (5) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 4.888228ms)
Sep 8 23:43:08.782: INFO: (5) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 4.926019ms)
Sep 8 23:43:08.782: INFO: (5) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 4.86885ms)
Sep 8 23:43:08.782: INFO: (5) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 4.901433ms)
Sep 8 23:43:08.782: INFO: (5) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:1080/proxy/: ... (200; 4.932899ms)
Sep 8 23:43:08.782: INFO: (5) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 4.877309ms)
Sep 8 23:43:08.782: INFO: (5) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 4.989981ms)
Sep 8 23:43:08.782: INFO: (5) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 5.04777ms)
Sep 8 23:43:08.782: INFO: (5) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 5.079411ms)
Sep 8 23:43:08.782: INFO: (5) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname2/proxy/: tls qux (200; 5.217555ms)
Sep 8 23:43:08.782: INFO: (5) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 5.375125ms)
Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 4.253219ms)
Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 4.375576ms)
Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 4.576214ms)
Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 4.477055ms)
Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 4.552466ms)
Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 4.714854ms)
Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 4.739085ms)
Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 4.808634ms)
Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 4.875577ms)
Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux
(200; 4.854387ms) Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 4.850284ms) Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 4.872034ms) Sep 8 23:43:08.787: INFO: (6) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: ... (200; 5.349249ms) Sep 8 23:43:08.791: INFO: (7) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:1080/proxy/: ... (200; 2.671713ms) Sep 8 23:43:08.792: INFO: (7) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 3.79938ms) Sep 8 23:43:08.792: INFO: (7) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 3.854928ms) Sep 8 23:43:08.792: INFO: (7) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 3.889062ms) Sep 8 23:43:08.792: INFO: (7) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 3.870829ms) Sep 8 23:43:08.792: INFO: (7) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: ... (200; 3.032857ms) Sep 8 23:43:08.798: INFO: (8) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 3.235312ms) Sep 8 23:43:08.798: INFO: (8) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 3.265957ms) Sep 8 23:43:08.798: INFO: (8) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 3.30888ms) Sep 8 23:43:08.798: INFO: (8) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: test<... 
(200; 4.184109ms) Sep 8 23:43:08.799: INFO: (8) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 4.189859ms) Sep 8 23:43:08.799: INFO: (8) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 4.255938ms) Sep 8 23:43:08.799: INFO: (8) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 4.228312ms) Sep 8 23:43:08.799: INFO: (8) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 4.350109ms) Sep 8 23:43:08.801: INFO: (9) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 2.441145ms) Sep 8 23:43:08.801: INFO: (9) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 2.389026ms) Sep 8 23:43:08.802: INFO: (9) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 3.307487ms) Sep 8 23:43:08.802: INFO: (9) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 3.305541ms) Sep 8 23:43:08.802: INFO: (9) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 3.30305ms) Sep 8 23:43:08.802: INFO: (9) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 3.3959ms) Sep 8 23:43:08.802: INFO: (9) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 3.388703ms) Sep 8 23:43:08.802: INFO: (9) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:1080/proxy/: ... (200; 3.343853ms) Sep 8 23:43:08.802: INFO: (9) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 3.432312ms) Sep 8 23:43:08.802: INFO: (9) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: test<... 
(200; 3.785636ms) Sep 8 23:43:08.808: INFO: (10) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 3.816001ms) Sep 8 23:43:08.808: INFO: (10) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 3.850975ms) Sep 8 23:43:08.808: INFO: (10) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 3.854585ms) Sep 8 23:43:08.808: INFO: (10) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: test (200; 3.964388ms) Sep 8 23:43:08.808: INFO: (10) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname2/proxy/: tls qux (200; 4.012164ms) Sep 8 23:43:08.808: INFO: (10) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 4.057962ms) Sep 8 23:43:08.808: INFO: (10) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 4.052559ms) Sep 8 23:43:08.808: INFO: (10) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 4.090927ms) Sep 8 23:43:08.808: INFO: (10) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:1080/proxy/: ... 
(200; 4.157102ms) Sep 8 23:43:08.808: INFO: (10) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 4.08319ms) Sep 8 23:43:08.808: INFO: (10) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 4.219486ms) Sep 8 23:43:08.812: INFO: (11) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 3.622498ms) Sep 8 23:43:08.813: INFO: (11) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 4.216807ms) Sep 8 23:43:08.813: INFO: (11) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 4.378637ms) Sep 8 23:43:08.813: INFO: (11) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 4.484171ms) Sep 8 23:43:08.813: INFO: (11) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname2/proxy/: tls qux (200; 4.668659ms) Sep 8 23:43:08.813: INFO: (11) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 5.015585ms) Sep 8 23:43:08.813: INFO: (11) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 5.018127ms) Sep 8 23:43:08.813: INFO: (11) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 5.030371ms) Sep 8 23:43:08.814: INFO: (11) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 5.210646ms) Sep 8 23:43:08.814: INFO: (11) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 5.287479ms) Sep 8 23:43:08.814: INFO: (11) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 5.22916ms) Sep 8 23:43:08.814: INFO: (11) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:1080/proxy/: ... 
(200; 5.321927ms) Sep 8 23:43:08.814: INFO: (11) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: test (200; 5.239049ms) Sep 8 23:43:08.814: INFO: (11) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 5.266049ms) Sep 8 23:43:08.814: INFO: (11) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 5.3153ms) Sep 8 23:43:08.816: INFO: (12) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: ... (200; 2.93932ms) Sep 8 23:43:08.817: INFO: (12) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 3.194953ms) Sep 8 23:43:08.817: INFO: (12) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 3.18771ms) Sep 8 23:43:08.817: INFO: (12) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 3.209984ms) Sep 8 23:43:08.817: INFO: (12) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 3.541927ms) Sep 8 23:43:08.817: INFO: (12) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 3.582151ms) Sep 8 23:43:08.817: INFO: (12) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 3.563389ms) Sep 8 23:43:08.817: INFO: (12) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 3.624609ms) Sep 8 23:43:08.817: INFO: (12) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 3.618459ms) Sep 8 23:43:08.817: INFO: (12) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 3.642417ms) Sep 8 23:43:08.817: INFO: (12) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 3.624267ms) Sep 8 23:43:08.818: INFO: (12) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 3.987912ms) Sep 
8 23:43:08.818: INFO: (12) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 4.124536ms) Sep 8 23:43:08.818: INFO: (12) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 4.196812ms) Sep 8 23:43:08.818: INFO: (12) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname2/proxy/: tls qux (200; 4.352123ms) Sep 8 23:43:08.821: INFO: (13) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 2.769627ms) Sep 8 23:43:08.821: INFO: (13) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 2.849594ms) Sep 8 23:43:08.821: INFO: (13) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 2.911136ms) Sep 8 23:43:08.821: INFO: (13) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 3.303254ms) Sep 8 23:43:08.821: INFO: (13) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 3.265806ms) Sep 8 23:43:08.821: INFO: (13) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 3.241791ms) Sep 8 23:43:08.822: INFO: (13) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 3.283419ms) Sep 8 23:43:08.822: INFO: (13) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:1080/proxy/: ... (200; 3.284593ms) Sep 8 23:43:08.822: INFO: (13) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: ... 
(200; 2.020158ms) Sep 8 23:43:08.825: INFO: (14) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: test (200; 3.328134ms) Sep 8 23:43:08.826: INFO: (14) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 3.382122ms) Sep 8 23:43:08.826: INFO: (14) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 3.587547ms) Sep 8 23:43:08.826: INFO: (14) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 3.669632ms) Sep 8 23:43:08.827: INFO: (14) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 3.84962ms) Sep 8 23:43:08.827: INFO: (14) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 3.759368ms) Sep 8 23:43:08.827: INFO: (14) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 3.812595ms) Sep 8 23:43:08.827: INFO: (14) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname2/proxy/: tls qux (200; 3.936789ms) Sep 8 23:43:08.827: INFO: (14) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 3.953677ms) Sep 8 23:43:08.827: INFO: (14) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 3.937764ms) Sep 8 23:43:08.827: INFO: (14) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 3.993477ms) Sep 8 23:43:08.827: INFO: (14) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 3.993732ms) Sep 8 23:43:08.829: INFO: (15) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:1080/proxy/: ... 
(200; 2.333784ms) Sep 8 23:43:08.829: INFO: (15) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 2.288019ms) Sep 8 23:43:08.829: INFO: (15) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 2.536047ms) Sep 8 23:43:08.830: INFO: (15) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 2.752407ms) Sep 8 23:43:08.830: INFO: (15) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 3.019086ms) Sep 8 23:43:08.830: INFO: (15) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... (200; 3.047042ms) Sep 8 23:43:08.830: INFO: (15) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 3.419203ms) Sep 8 23:43:08.830: INFO: (15) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 3.58664ms) Sep 8 23:43:08.831: INFO: (15) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 3.68332ms) Sep 8 23:43:08.831: INFO: (15) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 3.722216ms) Sep 8 23:43:08.831: INFO: (15) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 3.736129ms) Sep 8 23:43:08.831: INFO: (15) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 3.726491ms) Sep 8 23:43:08.831: INFO: (15) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: ... 
(200; 1.647579ms) Sep 8 23:43:08.834: INFO: (16) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 3.440877ms) Sep 8 23:43:08.834: INFO: (16) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 3.426723ms) Sep 8 23:43:08.834: INFO: (16) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 3.416153ms) Sep 8 23:43:08.834: INFO: (16) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 3.422138ms) Sep 8 23:43:08.834: INFO: (16) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: test<... (200; 3.598666ms) Sep 8 23:43:08.834: INFO: (16) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 3.594801ms) Sep 8 23:43:08.834: INFO: (16) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 3.681037ms) Sep 8 23:43:08.835: INFO: (16) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 3.754983ms) Sep 8 23:43:08.835: INFO: (16) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 3.989375ms) Sep 8 23:43:08.835: INFO: (16) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname2/proxy/: tls qux (200; 3.991831ms) Sep 8 23:43:08.835: INFO: (16) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 3.966225ms) Sep 8 23:43:08.838: INFO: (17) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 2.745999ms) Sep 8 23:43:08.838: INFO: (17) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... 
(200; 3.524798ms) Sep 8 23:43:08.838: INFO: (17) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 3.478395ms) Sep 8 23:43:08.838: INFO: (17) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 3.55697ms) Sep 8 23:43:08.838: INFO: (17) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: ... (200; 3.600992ms) Sep 8 23:43:08.839: INFO: (17) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname2/proxy/: bar (200; 4.166856ms) Sep 8 23:43:08.839: INFO: (17) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 4.176745ms) Sep 8 23:43:08.839: INFO: (17) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname2/proxy/: tls qux (200; 4.179231ms) Sep 8 23:43:08.839: INFO: (17) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 4.148892ms) Sep 8 23:43:08.839: INFO: (17) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname1/proxy/: foo (200; 4.152545ms) Sep 8 23:43:08.842: INFO: (18) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: ... (200; 4.505805ms) Sep 8 23:43:08.844: INFO: (18) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 4.592501ms) Sep 8 23:43:08.844: INFO: (18) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 4.477932ms) Sep 8 23:43:08.844: INFO: (18) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... 
(200; 4.467965ms) Sep 8 23:43:08.844: INFO: (18) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname2/proxy/: tls qux (200; 4.554085ms) Sep 8 23:43:08.844: INFO: (18) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 4.539004ms) Sep 8 23:43:08.844: INFO: (18) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 4.55721ms) Sep 8 23:43:08.844: INFO: (18) /api/v1/namespaces/proxy-8396/services/http:proxy-service-dcbvw:portname2/proxy/: bar (200; 4.570357ms) Sep 8 23:43:08.846: INFO: (19) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:160/proxy/: foo (200; 2.417476ms) Sep 8 23:43:08.847: INFO: (19) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz/proxy/: test (200; 2.682756ms) Sep 8 23:43:08.847: INFO: (19) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:462/proxy/: tls qux (200; 2.767659ms) Sep 8 23:43:08.847: INFO: (19) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 2.753032ms) Sep 8 23:43:08.847: INFO: (19) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:460/proxy/: tls baz (200; 3.10968ms) Sep 8 23:43:08.847: INFO: (19) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:162/proxy/: bar (200; 3.244491ms) Sep 8 23:43:08.847: INFO: (19) /api/v1/namespaces/proxy-8396/services/https:proxy-service-dcbvw:tlsportname1/proxy/: tls baz (200; 3.568702ms) Sep 8 23:43:08.847: INFO: (19) /api/v1/namespaces/proxy-8396/pods/http:proxy-service-dcbvw-kf7bz:1080/proxy/: ... (200; 3.592324ms) Sep 8 23:43:08.847: INFO: (19) /api/v1/namespaces/proxy-8396/pods/proxy-service-dcbvw-kf7bz:1080/proxy/: test<... 
(200; 3.544332ms) Sep 8 23:43:08.847: INFO: (19) /api/v1/namespaces/proxy-8396/services/proxy-service-dcbvw:portname1/proxy/: foo (200; 3.556023ms) Sep 8 23:43:08.847: INFO: (19) /api/v1/namespaces/proxy-8396/pods/https:proxy-service-dcbvw-kf7bz:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Sep 8 23:43:20.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c263ba7f-46ef-496d-beee-a3af31329c41" in namespace "downward-api-4492" to be "success or failure" Sep 8 23:43:20.121: INFO: Pod "downwardapi-volume-c263ba7f-46ef-496d-beee-a3af31329c41": Phase="Pending", Reason="", readiness=false. Elapsed: 25.864979ms Sep 8 23:43:22.125: INFO: Pod "downwardapi-volume-c263ba7f-46ef-496d-beee-a3af31329c41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029686624s Sep 8 23:43:24.129: INFO: Pod "downwardapi-volume-c263ba7f-46ef-496d-beee-a3af31329c41": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033778182s STEP: Saw pod success Sep 8 23:43:24.129: INFO: Pod "downwardapi-volume-c263ba7f-46ef-496d-beee-a3af31329c41" satisfied condition "success or failure" Sep 8 23:43:24.131: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c263ba7f-46ef-496d-beee-a3af31329c41 container client-container: STEP: delete the pod Sep 8 23:43:24.147: INFO: Waiting for pod downwardapi-volume-c263ba7f-46ef-496d-beee-a3af31329c41 to disappear Sep 8 23:43:24.151: INFO: Pod downwardapi-volume-c263ba7f-46ef-496d-beee-a3af31329c41 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 8 23:43:24.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4492" for this suite. Sep 8 23:43:30.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 8 23:43:30.310: INFO: namespace downward-api-4492 deletion completed in 6.155769991s • [SLOW TEST:10.503 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 8 23:43:30.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-4ea363ca-4c85-4237-9731-3b84612ad46b STEP: Creating a pod to test consume secrets Sep 8 23:43:30.597: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0dda132b-be52-492a-946a-867ac5fb7c75" in namespace "projected-365" to be "success or failure" Sep 8 23:43:30.612: INFO: Pod "pod-projected-secrets-0dda132b-be52-492a-946a-867ac5fb7c75": Phase="Pending", Reason="", readiness=false. Elapsed: 14.103711ms Sep 8 23:43:32.721: INFO: Pod "pod-projected-secrets-0dda132b-be52-492a-946a-867ac5fb7c75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123915161s Sep 8 23:43:34.725: INFO: Pod "pod-projected-secrets-0dda132b-be52-492a-946a-867ac5fb7c75": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.127767887s STEP: Saw pod success Sep 8 23:43:34.725: INFO: Pod "pod-projected-secrets-0dda132b-be52-492a-946a-867ac5fb7c75" satisfied condition "success or failure" Sep 8 23:43:34.728: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-0dda132b-be52-492a-946a-867ac5fb7c75 container secret-volume-test: STEP: delete the pod Sep 8 23:43:34.745: INFO: Waiting for pod pod-projected-secrets-0dda132b-be52-492a-946a-867ac5fb7c75 to disappear Sep 8 23:43:34.749: INFO: Pod pod-projected-secrets-0dda132b-be52-492a-946a-867ac5fb7c75 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 8 23:43:34.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-365" for this suite. Sep 8 23:43:40.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 8 23:43:40.861: INFO: namespace projected-365 deletion completed in 6.108694625s • [SLOW TEST:10.550 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] 
StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 8 23:43:40.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7644 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Sep 8 23:43:40.965: INFO: Found 0 stateful pods, waiting for 3 Sep 8 23:43:50.973: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 8 23:43:50.973: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 8 23:43:50.973: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Sep 8 23:44:00.969: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 8 23:44:00.969: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 8 23:44:00.969: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Sep 8 23:44:00.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7644 ss2-1 -- 
/bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 8 23:44:01.254: INFO: stderr: "I0908 23:44:01.110306 175 log.go:172] (0xc000992420) (0xc0002686e0) Create stream\nI0908 23:44:01.110371 175 log.go:172] (0xc000992420) (0xc0002686e0) Stream added, broadcasting: 1\nI0908 23:44:01.113133 175 log.go:172] (0xc000992420) Reply frame received for 1\nI0908 23:44:01.113175 175 log.go:172] (0xc000992420) (0xc0008e0000) Create stream\nI0908 23:44:01.113189 175 log.go:172] (0xc000992420) (0xc0008e0000) Stream added, broadcasting: 3\nI0908 23:44:01.114257 175 log.go:172] (0xc000992420) Reply frame received for 3\nI0908 23:44:01.114316 175 log.go:172] (0xc000992420) (0xc0008ca000) Create stream\nI0908 23:44:01.114336 175 log.go:172] (0xc000992420) (0xc0008ca000) Stream added, broadcasting: 5\nI0908 23:44:01.115341 175 log.go:172] (0xc000992420) Reply frame received for 5\nI0908 23:44:01.216210 175 log.go:172] (0xc000992420) Data frame received for 5\nI0908 23:44:01.216230 175 log.go:172] (0xc0008ca000) (5) Data frame handling\nI0908 23:44:01.216240 175 log.go:172] (0xc0008ca000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0908 23:44:01.246210 175 log.go:172] (0xc000992420) Data frame received for 3\nI0908 23:44:01.246251 175 log.go:172] (0xc0008e0000) (3) Data frame handling\nI0908 23:44:01.246322 175 log.go:172] (0xc0008e0000) (3) Data frame sent\nI0908 23:44:01.246340 175 log.go:172] (0xc000992420) Data frame received for 3\nI0908 23:44:01.246381 175 log.go:172] (0xc0008e0000) (3) Data frame handling\nI0908 23:44:01.246441 175 log.go:172] (0xc000992420) Data frame received for 5\nI0908 23:44:01.246485 175 log.go:172] (0xc0008ca000) (5) Data frame handling\nI0908 23:44:01.249233 175 log.go:172] (0xc000992420) Data frame received for 1\nI0908 23:44:01.249260 175 log.go:172] (0xc0002686e0) (1) Data frame handling\nI0908 23:44:01.249280 175 log.go:172] (0xc0002686e0) (1) Data frame sent\nI0908 23:44:01.249295 175 log.go:172] 
(0xc000992420) (0xc0002686e0) Stream removed, broadcasting: 1\nI0908 23:44:01.249311 175 log.go:172] (0xc000992420) Go away received\nI0908 23:44:01.249810 175 log.go:172] (0xc000992420) (0xc0002686e0) Stream removed, broadcasting: 1\nI0908 23:44:01.249837 175 log.go:172] (0xc000992420) (0xc0008e0000) Stream removed, broadcasting: 3\nI0908 23:44:01.249849 175 log.go:172] (0xc000992420) (0xc0008ca000) Stream removed, broadcasting: 5\n" Sep 8 23:44:01.254: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 8 23:44:01.254: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Sep 8 23:44:11.286: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Sep 8 23:44:21.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7644 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 8 23:44:21.539: INFO: stderr: "I0908 23:44:21.449089 195 log.go:172] (0xc000133130) (0xc0009ca820) Create stream\nI0908 23:44:21.449150 195 log.go:172] (0xc000133130) (0xc0009ca820) Stream added, broadcasting: 1\nI0908 23:44:21.453151 195 log.go:172] (0xc000133130) Reply frame received for 1\nI0908 23:44:21.453182 195 log.go:172] (0xc000133130) (0xc0003361e0) Create stream\nI0908 23:44:21.453191 195 log.go:172] (0xc000133130) (0xc0003361e0) Stream added, broadcasting: 3\nI0908 23:44:21.454074 195 log.go:172] (0xc000133130) Reply frame received for 3\nI0908 23:44:21.454116 195 log.go:172] (0xc000133130) (0xc000336320) Create stream\nI0908 23:44:21.454128 195 log.go:172] (0xc000133130) (0xc000336320) Stream added, broadcasting: 5\nI0908 23:44:21.455221 195 log.go:172] (0xc000133130) Reply frame received for 5\nI0908 23:44:21.532253 195 
log.go:172] (0xc000133130) Data frame received for 3\nI0908 23:44:21.532284 195 log.go:172] (0xc0003361e0) (3) Data frame handling\nI0908 23:44:21.532298 195 log.go:172] (0xc0003361e0) (3) Data frame sent\nI0908 23:44:21.532309 195 log.go:172] (0xc000133130) Data frame received for 3\nI0908 23:44:21.532321 195 log.go:172] (0xc0003361e0) (3) Data frame handling\nI0908 23:44:21.532335 195 log.go:172] (0xc000133130) Data frame received for 5\nI0908 23:44:21.532346 195 log.go:172] (0xc000336320) (5) Data frame handling\nI0908 23:44:21.532356 195 log.go:172] (0xc000336320) (5) Data frame sent\nI0908 23:44:21.532366 195 log.go:172] (0xc000133130) Data frame received for 5\nI0908 23:44:21.532376 195 log.go:172] (0xc000336320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0908 23:44:21.534185 195 log.go:172] (0xc000133130) Data frame received for 1\nI0908 23:44:21.534219 195 log.go:172] (0xc0009ca820) (1) Data frame handling\nI0908 23:44:21.534239 195 log.go:172] (0xc0009ca820) (1) Data frame sent\nI0908 23:44:21.534255 195 log.go:172] (0xc000133130) (0xc0009ca820) Stream removed, broadcasting: 1\nI0908 23:44:21.534274 195 log.go:172] (0xc000133130) Go away received\nI0908 23:44:21.534798 195 log.go:172] (0xc000133130) (0xc0009ca820) Stream removed, broadcasting: 1\nI0908 23:44:21.534823 195 log.go:172] (0xc000133130) (0xc0003361e0) Stream removed, broadcasting: 3\nI0908 23:44:21.534841 195 log.go:172] (0xc000133130) (0xc000336320) Stream removed, broadcasting: 5\n" Sep 8 23:44:21.539: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 8 23:44:21.539: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 8 23:44:31.559: INFO: Waiting for StatefulSet statefulset-7644/ss2 to complete update Sep 8 23:44:31.559: INFO: Waiting for Pod statefulset-7644/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 8 23:44:31.559: 
INFO: Waiting for Pod statefulset-7644/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 8 23:44:31.559: INFO: Waiting for Pod statefulset-7644/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 8 23:44:41.568: INFO: Waiting for StatefulSet statefulset-7644/ss2 to complete update Sep 8 23:44:41.568: INFO: Waiting for Pod statefulset-7644/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 8 23:44:41.568: INFO: Waiting for Pod statefulset-7644/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 8 23:44:51.566: INFO: Waiting for StatefulSet statefulset-7644/ss2 to complete update Sep 8 23:44:51.566: INFO: Waiting for Pod statefulset-7644/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Sep 8 23:45:01.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7644 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 8 23:45:01.849: INFO: stderr: "I0908 23:45:01.719917 216 log.go:172] (0xc000ade4d0) (0xc000a64780) Create stream\nI0908 23:45:01.719968 216 log.go:172] (0xc000ade4d0) (0xc000a64780) Stream added, broadcasting: 1\nI0908 23:45:01.723437 216 log.go:172] (0xc000ade4d0) Reply frame received for 1\nI0908 23:45:01.723501 216 log.go:172] (0xc000ade4d0) (0xc000a64000) Create stream\nI0908 23:45:01.723526 216 log.go:172] (0xc000ade4d0) (0xc000a64000) Stream added, broadcasting: 3\nI0908 23:45:01.724825 216 log.go:172] (0xc000ade4d0) Reply frame received for 3\nI0908 23:45:01.724887 216 log.go:172] (0xc000ade4d0) (0xc000800000) Create stream\nI0908 23:45:01.724918 216 log.go:172] (0xc000ade4d0) (0xc000800000) Stream added, broadcasting: 5\nI0908 23:45:01.725981 216 log.go:172] (0xc000ade4d0) Reply frame received for 5\nI0908 23:45:01.814779 216 log.go:172] (0xc000ade4d0) Data frame received for 5\nI0908 23:45:01.814804 216 log.go:172] (0xc000800000) 
(5) Data frame handling\nI0908 23:45:01.814820 216 log.go:172] (0xc000800000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0908 23:45:01.841231 216 log.go:172] (0xc000ade4d0) Data frame received for 3\nI0908 23:45:01.841267 216 log.go:172] (0xc000a64000) (3) Data frame handling\nI0908 23:45:01.841313 216 log.go:172] (0xc000a64000) (3) Data frame sent\nI0908 23:45:01.841325 216 log.go:172] (0xc000ade4d0) Data frame received for 3\nI0908 23:45:01.841334 216 log.go:172] (0xc000a64000) (3) Data frame handling\nI0908 23:45:01.841624 216 log.go:172] (0xc000ade4d0) Data frame received for 5\nI0908 23:45:01.841653 216 log.go:172] (0xc000800000) (5) Data frame handling\nI0908 23:45:01.843735 216 log.go:172] (0xc000ade4d0) Data frame received for 1\nI0908 23:45:01.843761 216 log.go:172] (0xc000a64780) (1) Data frame handling\nI0908 23:45:01.843788 216 log.go:172] (0xc000a64780) (1) Data frame sent\nI0908 23:45:01.843812 216 log.go:172] (0xc000ade4d0) (0xc000a64780) Stream removed, broadcasting: 1\nI0908 23:45:01.843835 216 log.go:172] (0xc000ade4d0) Go away received\nI0908 23:45:01.844448 216 log.go:172] (0xc000ade4d0) (0xc000a64780) Stream removed, broadcasting: 1\nI0908 23:45:01.844475 216 log.go:172] (0xc000ade4d0) (0xc000a64000) Stream removed, broadcasting: 3\nI0908 23:45:01.844488 216 log.go:172] (0xc000ade4d0) (0xc000800000) Stream removed, broadcasting: 5\n" Sep 8 23:45:01.849: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 8 23:45:01.849: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 8 23:45:11.879: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Sep 8 23:45:21.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7644 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 8 23:45:22.134: INFO: stderr: "I0908 
23:45:22.038617 237 log.go:172] (0xc0001326e0) (0xc0008f68c0) Create stream\nI0908 23:45:22.038662 237 log.go:172] (0xc0001326e0) (0xc0008f68c0) Stream added, broadcasting: 1\nI0908 23:45:22.040863 237 log.go:172] (0xc0001326e0) Reply frame received for 1\nI0908 23:45:22.040902 237 log.go:172] (0xc0001326e0) (0xc000664320) Create stream\nI0908 23:45:22.040911 237 log.go:172] (0xc0001326e0) (0xc000664320) Stream added, broadcasting: 3\nI0908 23:45:22.041651 237 log.go:172] (0xc0001326e0) Reply frame received for 3\nI0908 23:45:22.041683 237 log.go:172] (0xc0001326e0) (0xc0008f6960) Create stream\nI0908 23:45:22.041692 237 log.go:172] (0xc0001326e0) (0xc0008f6960) Stream added, broadcasting: 5\nI0908 23:45:22.042661 237 log.go:172] (0xc0001326e0) Reply frame received for 5\nI0908 23:45:22.126664 237 log.go:172] (0xc0001326e0) Data frame received for 5\nI0908 23:45:22.126697 237 log.go:172] (0xc0008f6960) (5) Data frame handling\nI0908 23:45:22.126712 237 log.go:172] (0xc0008f6960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0908 23:45:22.126732 237 log.go:172] (0xc0001326e0) Data frame received for 3\nI0908 23:45:22.126743 237 log.go:172] (0xc000664320) (3) Data frame handling\nI0908 23:45:22.126754 237 log.go:172] (0xc000664320) (3) Data frame sent\nI0908 23:45:22.126764 237 log.go:172] (0xc0001326e0) Data frame received for 3\nI0908 23:45:22.126774 237 log.go:172] (0xc000664320) (3) Data frame handling\nI0908 23:45:22.126817 237 log.go:172] (0xc0001326e0) Data frame received for 5\nI0908 23:45:22.126849 237 log.go:172] (0xc0008f6960) (5) Data frame handling\nI0908 23:45:22.128583 237 log.go:172] (0xc0001326e0) Data frame received for 1\nI0908 23:45:22.128606 237 log.go:172] (0xc0008f68c0) (1) Data frame handling\nI0908 23:45:22.128625 237 log.go:172] (0xc0008f68c0) (1) Data frame sent\nI0908 23:45:22.128641 237 log.go:172] (0xc0001326e0) (0xc0008f68c0) Stream removed, broadcasting: 1\nI0908 23:45:22.128714 237 log.go:172] (0xc0001326e0) Go 
away received\nI0908 23:45:22.129915 237 log.go:172] (0xc0001326e0) (0xc0008f68c0) Stream removed, broadcasting: 1\nI0908 23:45:22.130056 237 log.go:172] (0xc0001326e0) (0xc000664320) Stream removed, broadcasting: 3\nI0908 23:45:22.130166 237 log.go:172] (0xc0001326e0) (0xc0008f6960) Stream removed, broadcasting: 5\n" Sep 8 23:45:22.134: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 8 23:45:22.134: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 8 23:45:32.161: INFO: Waiting for StatefulSet statefulset-7644/ss2 to complete update Sep 8 23:45:32.161: INFO: Waiting for Pod statefulset-7644/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 8 23:45:32.161: INFO: Waiting for Pod statefulset-7644/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 8 23:45:32.161: INFO: Waiting for Pod statefulset-7644/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 8 23:45:42.168: INFO: Waiting for StatefulSet statefulset-7644/ss2 to complete update Sep 8 23:45:42.169: INFO: Waiting for Pod statefulset-7644/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 8 23:45:42.169: INFO: Waiting for Pod statefulset-7644/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 8 23:45:52.169: INFO: Waiting for StatefulSet statefulset-7644/ss2 to complete update Sep 8 23:45:52.169: INFO: Waiting for Pod statefulset-7644/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Sep 8 23:46:02.168: INFO: Deleting all statefulset in ns statefulset-7644 Sep 8 23:46:02.171: INFO: Scaling statefulset ss2 to 0 Sep 8 23:46:32.199: INFO: Waiting for 
statefulset status.replicas updated to 0
Sep 8 23:46:32.202: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:46:32.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7644" for this suite.
Sep 8 23:46:38.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:46:38.311: INFO: namespace statefulset-7644 deletion completed in 6.089382878s
• [SLOW TEST:177.449 seconds]
[sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:46:38.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned 
in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Sep 8 23:46:38.406: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2150,SelfLink:/api/v1/namespaces/watch-2150/configmaps/e2e-watch-test-label-changed,UID:8c0645bd-45ea-48d8-85e3-31a740d743cf,ResourceVersion:310925,Generation:0,CreationTimestamp:2020-09-08 23:46:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Sep 8 23:46:38.407: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2150,SelfLink:/api/v1/namespaces/watch-2150/configmaps/e2e-watch-test-label-changed,UID:8c0645bd-45ea-48d8-85e3-31a740d743cf,ResourceVersion:310926,Generation:0,CreationTimestamp:2020-09-08 23:46:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Sep 8 23:46:38.407: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2150,SelfLink:/api/v1/namespaces/watch-2150/configmaps/e2e-watch-test-label-changed,UID:8c0645bd-45ea-48d8-85e3-31a740d743cf,ResourceVersion:310927,Generation:0,CreationTimestamp:2020-09-08 23:46:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Sep 8 23:46:48.470: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2150,SelfLink:/api/v1/namespaces/watch-2150/configmaps/e2e-watch-test-label-changed,UID:8c0645bd-45ea-48d8-85e3-31a740d743cf,ResourceVersion:310981,Generation:0,CreationTimestamp:2020-09-08 23:46:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Sep 8 23:46:48.470: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2150,SelfLink:/api/v1/namespaces/watch-2150/configmaps/e2e-watch-test-label-changed,UID:8c0645bd-45ea-48d8-85e3-31a740d743cf,ResourceVersion:310982,Generation:0,CreationTimestamp:2020-09-08 23:46:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Sep 8 23:46:48.470: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2150,SelfLink:/api/v1/namespaces/watch-2150/configmaps/e2e-watch-test-label-changed,UID:8c0645bd-45ea-48d8-85e3-31a740d743cf,ResourceVersion:310983,Generation:0,CreationTimestamp:2020-09-08 23:46:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 8 23:46:48.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2150" for this suite. 
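For reference, the watched object mutated above can be reconstructed as a manifest. The name, namespace, label, and data key below are taken from the log; everything else is a minimal sketch. The test's watch selects on `watch-this-configmap=label-changed-and-restored`, so changing that label value surfaces on the watch as a DELETED event, and restoring it as an ADDED event, exactly as logged.

```yaml
# Sketch of the ConfigMap the Watchers test mutates; name/label/data from the log above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: watch-2150
  labels:
    # The watch uses this label as its selector; editing the value makes the
    # object stop matching, which the watcher observes as DELETED.
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"
```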
Sep 8 23:46:54.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:46:54.587: INFO: namespace watch-2150 deletion completed in 6.099235704s
• [SLOW TEST:16.276 seconds]
[sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:46:54.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 8 23:46:54.649: INFO: Waiting up to 5m0s for pod "pod-4d057ffd-9949-4213-b1b2-45fb109bd5bb" in namespace "emptydir-2690" to be "success or failure"
Sep 8 23:46:54.653: INFO: Pod "pod-4d057ffd-9949-4213-b1b2-45fb109bd5bb": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.762879ms Sep 8 23:46:56.657: INFO: Pod "pod-4d057ffd-9949-4213-b1b2-45fb109bd5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008033249s Sep 8 23:46:58.662: INFO: Pod "pod-4d057ffd-9949-4213-b1b2-45fb109bd5bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012405163s STEP: Saw pod success Sep 8 23:46:58.662: INFO: Pod "pod-4d057ffd-9949-4213-b1b2-45fb109bd5bb" satisfied condition "success or failure" Sep 8 23:46:58.665: INFO: Trying to get logs from node iruya-worker pod pod-4d057ffd-9949-4213-b1b2-45fb109bd5bb container test-container: STEP: delete the pod Sep 8 23:46:58.724: INFO: Waiting for pod pod-4d057ffd-9949-4213-b1b2-45fb109bd5bb to disappear Sep 8 23:46:58.759: INFO: Pod pod-4d057ffd-9949-4213-b1b2-45fb109bd5bb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 8 23:46:58.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2690" for this suite. 
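The tmpfs-backed emptyDir scenario exercised above can be sketched as a manifest. Only `medium: Memory` is the essential detail; the pod name, image, and command here are assumptions (the suite generates its own pod and uses its own test image), and the 0644 mode applies to the file the test writes into the volume, not to the volume itself.

```yaml
# Hypothetical sketch: the e2e test writes a file with mode 0644 into a
# memory-backed emptyDir and verifies its ownership and permissions.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs, as in the (root,0644,tmpfs) variant
```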
Sep 8 23:47:04.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 8 23:47:04.851: INFO: namespace emptydir-2690 deletion completed in 6.08883483s • [SLOW TEST:10.263 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 8 23:47:04.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-q5lx STEP: Creating a pod to test atomic-volume-subpath Sep 8 23:47:04.937: INFO: Waiting up to 5m0s for pod 
"pod-subpath-test-configmap-q5lx" in namespace "subpath-9952" to be "success or failure"
Sep 8 23:47:04.989: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Pending", Reason="", readiness=false. Elapsed: 51.599286ms
Sep 8 23:47:06.992: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055439997s
Sep 8 23:47:08.996: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Running", Reason="", readiness=true. Elapsed: 4.059153415s
Sep 8 23:47:11.001: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Running", Reason="", readiness=true. Elapsed: 6.063540375s
Sep 8 23:47:13.004: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Running", Reason="", readiness=true. Elapsed: 8.067168995s
Sep 8 23:47:15.008: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Running", Reason="", readiness=true. Elapsed: 10.071178214s
Sep 8 23:47:17.012: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Running", Reason="", readiness=true. Elapsed: 12.075083009s
Sep 8 23:47:19.016: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Running", Reason="", readiness=true. Elapsed: 14.079351558s
Sep 8 23:47:21.021: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Running", Reason="", readiness=true. Elapsed: 16.083568094s
Sep 8 23:47:23.025: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Running", Reason="", readiness=true. Elapsed: 18.087811831s
Sep 8 23:47:25.028: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Running", Reason="", readiness=true. Elapsed: 20.091403015s
Sep 8 23:47:27.033: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Running", Reason="", readiness=true. Elapsed: 22.095514358s
Sep 8 23:47:29.155: INFO: Pod "pod-subpath-test-configmap-q5lx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.21795516s STEP: Saw pod success Sep 8 23:47:29.155: INFO: Pod "pod-subpath-test-configmap-q5lx" satisfied condition "success or failure" Sep 8 23:47:29.158: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-q5lx container test-container-subpath-configmap-q5lx: STEP: delete the pod Sep 8 23:47:29.184: INFO: Waiting for pod pod-subpath-test-configmap-q5lx to disappear Sep 8 23:47:29.186: INFO: Pod pod-subpath-test-configmap-q5lx no longer exists STEP: Deleting pod pod-subpath-test-configmap-q5lx Sep 8 23:47:29.186: INFO: Deleting pod "pod-subpath-test-configmap-q5lx" in namespace "subpath-9952" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 8 23:47:29.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9952" for this suite. Sep 8 23:47:35.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 8 23:47:35.292: INFO: namespace subpath-9952 deletion completed in 6.10067835s • [SLOW TEST:30.441 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 8 23:47:35.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 8 23:47:35.410: INFO: Waiting up to 5m0s for pod "pod-2f0866dc-b9bf-4081-9395-45fe783b1f72" in namespace "emptydir-7147" to be "success or failure" Sep 8 23:47:35.414: INFO: Pod "pod-2f0866dc-b9bf-4081-9395-45fe783b1f72": Phase="Pending", Reason="", readiness=false. Elapsed: 3.572809ms Sep 8 23:47:37.418: INFO: Pod "pod-2f0866dc-b9bf-4081-9395-45fe783b1f72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007824419s Sep 8 23:47:39.422: INFO: Pod "pod-2f0866dc-b9bf-4081-9395-45fe783b1f72": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011309612s STEP: Saw pod success Sep 8 23:47:39.422: INFO: Pod "pod-2f0866dc-b9bf-4081-9395-45fe783b1f72" satisfied condition "success or failure" Sep 8 23:47:39.424: INFO: Trying to get logs from node iruya-worker2 pod pod-2f0866dc-b9bf-4081-9395-45fe783b1f72 container test-container: STEP: delete the pod Sep 8 23:47:39.439: INFO: Waiting for pod pod-2f0866dc-b9bf-4081-9395-45fe783b1f72 to disappear Sep 8 23:47:39.444: INFO: Pod pod-2f0866dc-b9bf-4081-9395-45fe783b1f72 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 8 23:47:39.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7147" for this suite. Sep 8 23:47:45.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 8 23:47:45.534: INFO: namespace emptydir-7147 deletion completed in 6.087178851s • [SLOW TEST:10.242 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:47:45.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-d6f6c9de-0e59-4949-8ab2-b2435c375e27 in namespace container-probe-6639
Sep 8 23:47:49.666: INFO: Started pod liveness-d6f6c9de-0e59-4949-8ab2-b2435c375e27 in namespace container-probe-6639
STEP: checking the pod's current state and verifying that restartCount is present
Sep 8 23:47:49.669: INFO: Initial restart count of pod liveness-d6f6c9de-0e59-4949-8ab2-b2435c375e27 is 0
Sep 8 23:48:09.715: INFO: Restart count of pod container-probe-6639/liveness-d6f6c9de-0e59-4949-8ab2-b2435c375e27 is now 1 (20.046807626s elapsed)
Sep 8 23:48:29.837: INFO: Restart count of pod container-probe-6639/liveness-d6f6c9de-0e59-4949-8ab2-b2435c375e27 is now 2 (40.167954315s elapsed)
Sep 8 23:48:49.878: INFO: Restart count of pod container-probe-6639/liveness-d6f6c9de-0e59-4949-8ab2-b2435c375e27 is now 3 (1m0.209229138s elapsed)
Sep 8 23:49:09.964: INFO: Restart count of pod container-probe-6639/liveness-d6f6c9de-0e59-4949-8ab2-b2435c375e27 is now 4 (1m20.295285767s elapsed)
Sep 8 23:50:08.397: INFO: Restart count of pod container-probe-6639/liveness-d6f6c9de-0e59-4949-8ab2-b2435c375e27 is now 5 (2m18.728787782s elapsed)
STEP: deleting the pod 
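The monotonically increasing restart counts above, roughly one every 20 seconds, come from a liveness probe that starts failing. A hedged sketch of such a pod, using the standard busybox pattern rather than the suite's actual image and generated pod name:

```yaml
# Hypothetical pod whose liveness probe fails after ~20s: the kubelet then
# restarts the container repeatedly, so restartCount keeps growing.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/healthy; sleep 20; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # succeeds only while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
```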
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:50:08.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6639" for this suite.
Sep 8 23:50:14.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:50:14.511: INFO: namespace container-probe-6639 deletion completed in 6.097436204s
• [SLOW TEST:148.976 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:50:14.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Sep 8 23:50:15.117: INFO: created pod pod-service-account-defaultsa
Sep 8 23:50:15.117: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Sep 8 23:50:15.124: INFO: created pod pod-service-account-mountsa
Sep 8 23:50:15.124: INFO: pod pod-service-account-mountsa service account token volume mount: true
Sep 8 23:50:15.130: INFO: created pod pod-service-account-nomountsa
Sep 8 23:50:15.130: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Sep 8 23:50:15.150: INFO: created pod pod-service-account-defaultsa-mountspec
Sep 8 23:50:15.150: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Sep 8 23:50:15.186: INFO: created pod pod-service-account-mountsa-mountspec
Sep 8 23:50:15.186: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Sep 8 23:50:15.199: INFO: created pod pod-service-account-nomountsa-mountspec
Sep 8 23:50:15.199: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Sep 8 23:50:15.272: INFO: created pod pod-service-account-defaultsa-nomountspec
Sep 8 23:50:15.272: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Sep 8 23:50:15.301: INFO: created pod pod-service-account-mountsa-nomountspec
Sep 8 23:50:15.301: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Sep 8 23:50:15.357: INFO: created pod pod-service-account-nomountsa-nomountspec
Sep 8 23:50:15.357: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:50:15.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3936" for this suite.
Sep 8 23:50:45.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:50:45.623: INFO: namespace svcaccounts-3936 deletion completed in 30.186584751s
• [SLOW TEST:31.111 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:50:45.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep 8 23:50:45.774: INFO: Waiting up to 5m0s for pod "downward-api-a4155e74-0922-4e30-af07-3b0c992d3903" in namespace "downward-api-8547" to be "success or failure"
Sep 8 23:50:45.802: INFO: Pod "downward-api-a4155e74-0922-4e30-af07-3b0c992d3903": Phase="Pending", Reason="", readiness=false. Elapsed: 28.048168ms
Sep 8 23:50:47.917: INFO: Pod "downward-api-a4155e74-0922-4e30-af07-3b0c992d3903": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142997632s
Sep 8 23:50:49.920: INFO: Pod "downward-api-a4155e74-0922-4e30-af07-3b0c992d3903": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.145578337s
STEP: Saw pod success
Sep 8 23:50:49.920: INFO: Pod "downward-api-a4155e74-0922-4e30-af07-3b0c992d3903" satisfied condition "success or failure"
Sep 8 23:50:49.923: INFO: Trying to get logs from node iruya-worker2 pod downward-api-a4155e74-0922-4e30-af07-3b0c992d3903 container dapi-container:
STEP: delete the pod
Sep 8 23:50:49.999: INFO: Waiting for pod downward-api-a4155e74-0922-4e30-af07-3b0c992d3903 to disappear
Sep 8 23:50:50.012: INFO: Pod downward-api-a4155e74-0922-4e30-af07-3b0c992d3903 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:50:50.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8547" for this suite.
Sep 8 23:50:56.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:50:56.133: INFO: namespace downward-api-8547 deletion completed in 6.116472286s
• [SLOW TEST:10.510 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:50:56.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:51:00.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8983" for this suite.
Sep 8 23:51:46.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:51:46.405: INFO: namespace kubelet-test-8983 deletion completed in 46.092334016s
• [SLOW TEST:50.272 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:51:46.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-c95e4ea3-07d9-4eb6-b58e-901f3deec172
Sep 8 23:51:46.561: INFO: Pod name my-hostname-basic-c95e4ea3-07d9-4eb6-b58e-901f3deec172: Found 0 pods out of 1
Sep 8 23:51:51.565: INFO: Pod name my-hostname-basic-c95e4ea3-07d9-4eb6-b58e-901f3deec172: Found 1 pods out of 1
Sep 8 23:51:51.565: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c95e4ea3-07d9-4eb6-b58e-901f3deec172" are running
Sep 8 23:51:51.568: INFO: Pod "my-hostname-basic-c95e4ea3-07d9-4eb6-b58e-901f3deec172-rntl2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-08 23:51:46 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-08 23:51:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-08 23:51:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-08 23:51:46 +0000 UTC Reason: Message:}])
Sep 8 23:51:51.568: INFO: Trying to dial the pod
Sep 8 23:51:56.580: INFO: Controller my-hostname-basic-c95e4ea3-07d9-4eb6-b58e-901f3deec172: Got expected result from replica 1 [my-hostname-basic-c95e4ea3-07d9-4eb6-b58e-901f3deec172-rntl2]: "my-hostname-basic-c95e4ea3-07d9-4eb6-b58e-901f3deec172-rntl2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:51:56.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9791" for this suite.
Sep 8 23:52:02.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:52:02.684: INFO: namespace replication-controller-9791 deletion completed in 6.101193945s
• [SLOW TEST:16.279 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:52:02.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 8 23:52:03.164: INFO: Creating deployment "nginx-deployment"
Sep 8 23:52:03.205: INFO: Waiting for observed generation 1
Sep 8 23:52:05.259: INFO: Waiting for all required pods to come up
Sep 8 23:52:05.264: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Sep 8 23:52:17.302: INFO: Waiting for deployment "nginx-deployment" to complete
Sep 8 23:52:17.307: INFO: Updating deployment "nginx-deployment" with a non-existent image
Sep 8 23:52:17.314: INFO: Updating deployment nginx-deployment
Sep 8 23:52:17.314: INFO: Waiting for observed generation 2
Sep 8 23:52:19.403: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Sep 8 23:52:19.406: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Sep 8 23:52:19.408: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Sep 8 23:52:19.415: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Sep 8 23:52:19.415: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Sep 8 23:52:19.416: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Sep 8 23:52:19.419: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Sep 8 23:52:19.419: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Sep 8 23:52:19.424: INFO: Updating deployment nginx-deployment
Sep 8 23:52:19.424: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Sep 8 23:52:19.679: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Sep 8 23:52:20.039: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep 8 23:52:20.554: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2396,SelfLink:/apis/apps/v1/namespaces/deployment-2396/deployments/nginx-deployment,UID:ee7beb2b-915d-461f-87f2-e92ecbdc5e8b,ResourceVersion:312586,Generation:3,CreationTimestamp:2020-09-08 23:52:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-09-08 23:52:17 +0000 UTC 2020-09-08 23:52:03 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-09-08 23:52:19 +0000 UTC 2020-09-08 23:52:19 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Sep 8 23:52:21.326: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2396,SelfLink:/apis/apps/v1/namespaces/deployment-2396/replicasets/nginx-deployment-55fb7cb77f,UID:e398a7ed-f035-42ba-8347-e7404225d8bc,ResourceVersion:312622,Generation:3,CreationTimestamp:2020-09-08 23:52:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ee7beb2b-915d-461f-87f2-e92ecbdc5e8b 0xc003e3ebf7 0xc003e3ebf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Sep 8 23:52:21.326: INFO: All old ReplicaSets of Deployment "nginx-deployment": Sep 8 23:52:21.326: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2396,SelfLink:/apis/apps/v1/namespaces/deployment-2396/replicasets/nginx-deployment-7b8c6f4498,UID:580280f1-e79c-42ce-ac08-cec8c5a2e46a,ResourceVersion:312620,Generation:3,CreationTimestamp:2020-09-08 23:52:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ee7beb2b-915d-461f-87f2-e92ecbdc5e8b 0xc003e3ed67 0xc003e3ed68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Sep 8 23:52:21.588: INFO: Pod "nginx-deployment-55fb7cb77f-6vdwn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6vdwn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-6vdwn,UID:fb281856-6e7f-42aa-a2f9-fac2030ad5f7,ResourceVersion:312525,Generation:0,CreationTimestamp:2020-09-08 23:52:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc003e3fde7 0xc003e3fde8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc003e3ff00} {node.kubernetes.io/unreachable Exists NoExecute 0xc003e3ff20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-09-08 23:52:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.589: INFO: Pod "nginx-deployment-55fb7cb77f-7j7lg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7j7lg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-7j7lg,UID:3ced535f-a225-41dc-8be7-c06be9956c2e,ResourceVersion:312589,Generation:0,CreationTimestamp:2020-09-08 23:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc003e3fff0 0xc003e3fff1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6070} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.589: INFO: Pod "nginx-deployment-55fb7cb77f-bx9cz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bx9cz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-bx9cz,UID:b5afec41-2179-4cf3-9a64-60ed558f3a02,ResourceVersion:312538,Generation:0,CreationTimestamp:2020-09-08 23:52:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc0022c6110 0xc0022c6111}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0022c6190} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c61b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-09-08 23:52:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.589: INFO: Pod "nginx-deployment-55fb7cb77f-db8mz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-db8mz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-db8mz,UID:76016503-611d-4bef-a91c-36ef8238a010,ResourceVersion:312619,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc0022c6280 0xc0022c6281}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6300} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.589: INFO: Pod "nginx-deployment-55fb7cb77f-f9hkr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f9hkr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-f9hkr,UID:9d5c6676-d874-45b4-b0d5-bce11d2abfc7,ResourceVersion:312625,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc0022c63a0 0xc0022c63a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0022c6420} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.589: INFO: Pod "nginx-deployment-55fb7cb77f-jg7wt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jg7wt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-jg7wt,UID:ac319c91-3891-4d5e-b75f-07548d899faf,ResourceVersion:312553,Generation:0,CreationTimestamp:2020-09-08 23:52:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc0022c64c0 0xc0022c64c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6540} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-09-08 23:52:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.589: INFO: Pod "nginx-deployment-55fb7cb77f-kxb6x" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kxb6x,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-kxb6x,UID:75223641-b20b-4cdb-94c6-758915b37655,ResourceVersion:312610,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc0022c6630 0xc0022c6631}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0022c66b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c66d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.590: INFO: Pod "nginx-deployment-55fb7cb77f-l2c78" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l2c78,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-l2c78,UID:b358feb6-31bf-4bb5-ab04-5f0f7b9879f1,ResourceVersion:312591,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc0022c6750 0xc0022c6751}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c67d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c67f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.590: INFO: Pod "nginx-deployment-55fb7cb77f-tqbkj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tqbkj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-tqbkj,UID:c8e73a87-6314-43ac-b33c-aceab8157cc2,ResourceVersion:312609,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc0022c6870 0xc0022c6871}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c68f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.590: INFO: Pod "nginx-deployment-55fb7cb77f-vgrw8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vgrw8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-vgrw8,UID:2c54e922-18f2-4365-b7dd-0e5b162f7b6c,ResourceVersion:312616,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc0022c6990 0xc0022c6991}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0022c6a10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.590: INFO: Pod "nginx-deployment-55fb7cb77f-wf4wc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wf4wc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-wf4wc,UID:79dcd961-705b-4725-b7c1-16ea4bd1887f,ResourceVersion:312606,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc0022c6ab0 0xc0022c6ab1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6b30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.590: INFO: Pod "nginx-deployment-55fb7cb77f-whdlk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-whdlk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-whdlk,UID:0ca02127-5051-4b3f-b53f-fa931e2a2bb9,ResourceVersion:312551,Generation:0,CreationTimestamp:2020-09-08 23:52:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc0022c6bd0 0xc0022c6bd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC 
}],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-09-08 23:52:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.590: INFO: Pod "nginx-deployment-55fb7cb77f-xrs54" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xrs54,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-55fb7cb77f-xrs54,UID:91509ca6-25ac-41c6-b2e7-25f9c0aa45ef,ResourceVersion:312524,Generation:0,CreationTimestamp:2020-09-08 23:52:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e398a7ed-f035-42ba-8347-e7404225d8bc 0xc0022c6d40 0xc0022c6d41}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6dc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-09-08 23:52:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.590: INFO: Pod "nginx-deployment-7b8c6f4498-b8dld" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b8dld,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-b8dld,UID:b5819844-ebda-41ed-ac46-e3d55a56823c,ResourceVersion:312446,Generation:0,CreationTimestamp:2020-09-08 23:52:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0022c6ec0 0xc0022c6ec1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6f30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.245,StartTime:2020-09-08 23:52:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-08 23:52:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f35d88d60cbfc819bad131e0a4b5db0b4450a55fc5ee85d08e6f63d91ed36ae6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.591: INFO: Pod "nginx-deployment-7b8c6f4498-bmn7f" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bmn7f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-bmn7f,UID:d1b56ce7-76f9-4397-89b7-fd1efb822b39,ResourceVersion:312594,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0022c7020 0xc0022c7021}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7090} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c70b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.591: INFO: Pod "nginx-deployment-7b8c6f4498-bzrnz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bzrnz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-bzrnz,UID:44276e57-7587-4ac5-b4b4-9458cfe93378,ResourceVersion:312640,Generation:0,CreationTimestamp:2020-09-08 23:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0022c7130 0xc0022c7131}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c71a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c71c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC 
}],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-09-08 23:52:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.591: INFO: Pod "nginx-deployment-7b8c6f4498-dtz8t" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dtz8t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-dtz8t,UID:6381b00d-546b-4f91-a751-86706ea5004a,ResourceVersion:312435,Generation:0,CreationTimestamp:2020-09-08 23:52:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0022c7280 0xc0022c7281}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c72f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.244,StartTime:2020-09-08 23:52:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-08 23:52:09 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://293b9c792968ace090ab6360eb9b67f3d33b1c6608dbe507b7a817e2418d932f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.591: INFO: Pod "nginx-deployment-7b8c6f4498-fshws" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fshws,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-fshws,UID:8ba5809d-9343-45c1-b88b-8271410c95a6,ResourceVersion:312487,Generation:0,CreationTimestamp:2020-09-08 23:52:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0022c7420 0xc0022c7421}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7490} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c74b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.53,StartTime:2020-09-08 23:52:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-08 23:52:14 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d067c87dcf8eabdd89b6f6d514913a581ed2923c95185d4edadb761f09a5214e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.591: INFO: Pod "nginx-deployment-7b8c6f4498-ggf9s" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ggf9s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-ggf9s,UID:96b37730-d04c-44e1-877e-0cbfa238b7f0,ResourceVersion:312469,Generation:0,CreationTimestamp:2020-09-08 23:52:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0022c7580 0xc0022c7581}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c75f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.51,StartTime:2020-09-08 23:52:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-08 23:52:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b7125a904d19fd66ea4037cc488969bba7ff174cf959493d359269a1db889a87}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.591: INFO: Pod "nginx-deployment-7b8c6f4498-ghxrg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ghxrg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-ghxrg,UID:e3c734d3-1640-43cb-9dc2-766134bd2266,ResourceVersion:312613,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0022c76e0 0xc0022c76e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7750} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.591: INFO: Pod "nginx-deployment-7b8c6f4498-hldtp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hldtp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-hldtp,UID:8e604740-f1c8-4cb9-a32e-d11925488a20,ResourceVersion:312470,Generation:0,CreationTimestamp:2020-09-08 23:52:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0022c77f0 0xc0022c77f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7860} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.248,StartTime:2020-09-08 23:52:03 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-09-08 23:52:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://abf6f50183729729a90a1929a610dade79f32f5e708b77d959b7462d6bf621b5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.592: INFO: Pod "nginx-deployment-7b8c6f4498-km4pn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-km4pn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-km4pn,UID:af1e8a98-07ad-4402-a1d0-7995e7cba2c5,ResourceVersion:312595,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0022c7970 0xc0022c7971}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c79e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.592: INFO: Pod "nginx-deployment-7b8c6f4498-lcg57" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lcg57,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-lcg57,UID:0acabe10-f765-4bb6-94f5-f82b21561d9e,ResourceVersion:312635,Generation:0,CreationTimestamp:2020-09-08 23:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0022c7a80 0xc0022c7a81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7af0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC 
}],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-09-08 23:52:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.592: INFO: Pod "nginx-deployment-7b8c6f4498-lcl86" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lcl86,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-lcl86,UID:01f6206b-143d-45e4-8853-7486aba7a584,ResourceVersion:312614,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0022c7bd0 0xc0022c7bd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.592: INFO: Pod "nginx-deployment-7b8c6f4498-mbqfl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mbqfl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-mbqfl,UID:52267922-67e8-425d-99e8-665993044da1,ResourceVersion:312605,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0026bc040 0xc0026bc041}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026bc0b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026bc0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.592: INFO: Pod "nginx-deployment-7b8c6f4498-p992l" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p992l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-p992l,UID:079dc4bf-0913-40ba-ad74-146f6a4b8433,ResourceVersion:312612,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0026bc170 0xc0026bc171}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026bc1e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026bc200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.592: INFO: Pod "nginx-deployment-7b8c6f4498-pg24n" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pg24n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-pg24n,UID:f9dc41b9-d56a-4957-9225-98aa00d04f6a,ResourceVersion:312456,Generation:0,CreationTimestamp:2020-09-08 23:52:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0026bc280 0xc0026bc281}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026bc2f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026bc310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.246,StartTime:2020-09-08 23:52:03 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-09-08 23:52:12 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9dccceeeb7f40afc5281aab6d7c29d20da52a82f37851e39202fe788c598b374}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.593: INFO: Pod "nginx-deployment-7b8c6f4498-qwc5m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qwc5m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-qwc5m,UID:a68337a4-231d-4a30-a080-063e4702a540,ResourceVersion:312621,Generation:0,CreationTimestamp:2020-09-08 23:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0026bc3e0 0xc0026bc3e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026bc450} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026bc470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:19 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-09-08 23:52:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.593: INFO: Pod "nginx-deployment-7b8c6f4498-qzpzm" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qzpzm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-qzpzm,UID:a071211a-a63b-43db-af62-2636d8c44fbf,ResourceVersion:312440,Generation:0,CreationTimestamp:2020-09-08 23:52:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0026bc560 0xc0026bc561}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026bc5d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026bc5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.50,StartTime:2020-09-08 23:52:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-08 23:52:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d72b385112c8fbf20762f592a414bd2f5356e2e6a933023cf1a08245ffb5a185}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.593: INFO: Pod "nginx-deployment-7b8c6f4498-sxtpx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sxtpx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-sxtpx,UID:cb475ddd-4903-4b63-ae0c-dbd6d921ec80,ResourceVersion:312618,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0026bc6c0 0xc0026bc6c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026bc740} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026bc760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.593: INFO: Pod "nginx-deployment-7b8c6f4498-t2wkd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t2wkd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-t2wkd,UID:8d084f47-d8f2-44a6-bd85-68d6dddd8a4a,ResourceVersion:312607,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0026bc7e0 0xc0026bc7e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026bc850} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026bc870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.593: INFO: Pod "nginx-deployment-7b8c6f4498-v7qjj" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v7qjj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-v7qjj,UID:938df39d-c01d-47e7-ac43-39d4287a7b7d,ResourceVersion:312484,Generation:0,CreationTimestamp:2020-09-08 23:52:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0026bc8f0 0xc0026bc8f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026bc960} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026bc980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.52,StartTime:2020-09-08 23:52:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-08 23:52:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://72a52debd4bc43fa41be6a1dd99712070c2ec014b250d93c656d5751df487595}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 8 23:52:21.593: INFO: Pod "nginx-deployment-7b8c6f4498-vgtnk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vgtnk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2396,SelfLink:/api/v1/namespaces/deployment-2396/pods/nginx-deployment-7b8c6f4498-vgtnk,UID:996459ee-6c34-4abb-9d66-a4ca1d0e90ad,ResourceVersion:312617,Generation:0,CreationTimestamp:2020-09-08 23:52:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 580280f1-e79c-42ce-ac08-cec8c5a2e46a 0xc0026bca50 0xc0026bca51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kb4vg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kb4vg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kb4vg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026bcac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026bcae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-08 23:52:20 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 8 23:52:21.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2396" for this suite. 
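The Deployment test that produced the Pod dumps above verifies proportional scaling: when a rollout is in progress and the Deployment is resized, the controller splits the added replicas across the old and new ReplicaSets in proportion to their current sizes, handing any rounding leftover to the largest ReplicaSet. A minimal sketch of that arithmetic (illustrative only — the real controller also honors maxSurge/maxUnavailable and annotation bookkeeping, which this ignores):

```python
def proportional_scale(replica_sets, delta):
    """Distribute `delta` added replicas across ReplicaSets in proportion
    to their current sizes (sketch of Deployment proportional scaling).

    replica_sets: dict of name -> current replica count
    returns:      dict of name -> new replica count
    """
    total = sum(replica_sets.values())
    if total == 0:
        return dict(replica_sets)
    new = {}
    assigned = 0
    for name, count in replica_sets.items():
        # Floor of each ReplicaSet's proportional share of the delta.
        add = delta * count // total
        new[name] = count + add
        assigned += add
    # Rounding leftover goes to the largest ReplicaSet.
    largest = max(new, key=new.get)
    new[largest] += delta - assigned
    return new

# e.g. scaling a Deployment from 10 to 15 replicas mid-rollout:
print(proportional_scale({"old-rs": 8, "new-rs": 2}, 5))
```

The replica counts in the example are hypothetical, not taken from this run; they only show how a scale-up is apportioned between two ReplicaSets.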
Sep 8 23:52:45.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 8 23:52:46.080: INFO: namespace deployment-2396 deletion completed in 24.307689533s • [SLOW TEST:43.395 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 8 23:52:46.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Sep 8 23:52:46.143: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
Sep 8 23:52:46.674: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Sep 8 23:52:49.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735205966, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735205966, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735205966, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735205966, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 8 23:52:51.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735205966, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735205966, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735205966, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735205966, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 8 23:52:53.838: INFO: Waited 625.644175ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 8 23:52:55.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2704" for this suite. Sep 8 23:53:01.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 8 23:53:01.579: INFO: namespace aggregator-2704 deletion completed in 6.368797538s • [SLOW TEST:15.498 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 8 
23:53:01.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 8 23:53:27.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9681" for this suite. Sep 8 23:53:33.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 8 23:53:33.875: INFO: namespace namespaces-9681 deletion completed in 6.088695386s STEP: Destroying namespace "nsdeletetest-6673" for this suite. Sep 8 23:53:33.877: INFO: Namespace nsdeletetest-6673 was already deleted STEP: Destroying namespace "nsdeletetest-7887" for this suite. 
Sep 8 23:53:39.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 8 23:53:40.033: INFO: namespace nsdeletetest-7887 deletion completed in 6.155069057s • [SLOW TEST:38.453 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 8 23:53:40.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4274 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 8 23:53:40.130: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Sep 8 
23:54:02.350: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.13:8080/dial?request=hostName&protocol=http&host=10.244.1.12&port=8080&tries=1'] Namespace:pod-network-test-4274 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 8 23:54:02.350: INFO: >>> kubeConfig: /root/.kube/config I0908 23:54:02.391587 6 log.go:172] (0xc000b08e70) (0xc0023fe000) Create stream I0908 23:54:02.391626 6 log.go:172] (0xc000b08e70) (0xc0023fe000) Stream added, broadcasting: 1 I0908 23:54:02.395426 6 log.go:172] (0xc000b08e70) Reply frame received for 1 I0908 23:54:02.395499 6 log.go:172] (0xc000b08e70) (0xc001b24640) Create stream I0908 23:54:02.395539 6 log.go:172] (0xc000b08e70) (0xc001b24640) Stream added, broadcasting: 3 I0908 23:54:02.397040 6 log.go:172] (0xc000b08e70) Reply frame received for 3 I0908 23:54:02.397099 6 log.go:172] (0xc000b08e70) (0xc002f1a3c0) Create stream I0908 23:54:02.397136 6 log.go:172] (0xc000b08e70) (0xc002f1a3c0) Stream added, broadcasting: 5 I0908 23:54:02.398237 6 log.go:172] (0xc000b08e70) Reply frame received for 5 I0908 23:54:02.488384 6 log.go:172] (0xc000b08e70) Data frame received for 3 I0908 23:54:02.488415 6 log.go:172] (0xc001b24640) (3) Data frame handling I0908 23:54:02.488432 6 log.go:172] (0xc001b24640) (3) Data frame sent I0908 23:54:02.489142 6 log.go:172] (0xc000b08e70) Data frame received for 5 I0908 23:54:02.489173 6 log.go:172] (0xc002f1a3c0) (5) Data frame handling I0908 23:54:02.489190 6 log.go:172] (0xc000b08e70) Data frame received for 3 I0908 23:54:02.489198 6 log.go:172] (0xc001b24640) (3) Data frame handling I0908 23:54:02.491056 6 log.go:172] (0xc000b08e70) Data frame received for 1 I0908 23:54:02.491078 6 log.go:172] (0xc0023fe000) (1) Data frame handling I0908 23:54:02.491087 6 log.go:172] (0xc0023fe000) (1) Data frame sent I0908 23:54:02.491098 6 log.go:172] (0xc000b08e70) (0xc0023fe000) Stream removed, broadcasting: 1 
I0908 23:54:02.491134 6 log.go:172] (0xc000b08e70) Go away received
I0908 23:54:02.491197 6 log.go:172] (0xc000b08e70) (0xc0023fe000) Stream removed, broadcasting: 1
I0908 23:54:02.491214 6 log.go:172] (0xc000b08e70) (0xc001b24640) Stream removed, broadcasting: 3
I0908 23:54:02.491225 6 log.go:172] (0xc000b08e70) (0xc002f1a3c0) Stream removed, broadcasting: 5
Sep 8 23:54:02.491: INFO: Waiting for endpoints: map[]
Sep 8 23:54:02.494: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.13:8080/dial?request=hostName&protocol=http&host=10.244.2.71&port=8080&tries=1'] Namespace:pod-network-test-4274 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 8 23:54:02.494: INFO: >>> kubeConfig: /root/.kube/config
I0908 23:54:02.525786 6 log.go:172] (0xc001461550) (0xc001b24aa0) Create stream
I0908 23:54:02.525826 6 log.go:172] (0xc001461550) (0xc001b24aa0) Stream added, broadcasting: 1
I0908 23:54:02.530737 6 log.go:172] (0xc001461550) Reply frame received for 1
I0908 23:54:02.530842 6 log.go:172] (0xc001461550) (0xc0023fe0a0) Create stream
I0908 23:54:02.530904 6 log.go:172] (0xc001461550) (0xc0023fe0a0) Stream added, broadcasting: 3
I0908 23:54:02.532599 6 log.go:172] (0xc001461550) Reply frame received for 3
I0908 23:54:02.532636 6 log.go:172] (0xc001461550) (0xc002f1a460) Create stream
I0908 23:54:02.532649 6 log.go:172] (0xc001461550) (0xc002f1a460) Stream added, broadcasting: 5
I0908 23:54:02.534108 6 log.go:172] (0xc001461550) Reply frame received for 5
I0908 23:54:02.605610 6 log.go:172] (0xc001461550) Data frame received for 3
I0908 23:54:02.605639 6 log.go:172] (0xc0023fe0a0) (3) Data frame handling
I0908 23:54:02.605658 6 log.go:172] (0xc0023fe0a0) (3) Data frame sent
I0908 23:54:02.606494 6 log.go:172] (0xc001461550) Data frame received for 5
I0908 23:54:02.606516 6 log.go:172] (0xc002f1a460) (5) Data frame handling
I0908 23:54:02.606552 6 log.go:172] (0xc001461550) Data frame received for 3
I0908 23:54:02.606570 6 log.go:172] (0xc0023fe0a0) (3) Data frame handling
I0908 23:54:02.608230 6 log.go:172] (0xc001461550) Data frame received for 1
I0908 23:54:02.608265 6 log.go:172] (0xc001b24aa0) (1) Data frame handling
I0908 23:54:02.608283 6 log.go:172] (0xc001b24aa0) (1) Data frame sent
I0908 23:54:02.608291 6 log.go:172] (0xc001461550) (0xc001b24aa0) Stream removed, broadcasting: 1
I0908 23:54:02.608363 6 log.go:172] (0xc001461550) (0xc001b24aa0) Stream removed, broadcasting: 1
I0908 23:54:02.608380 6 log.go:172] (0xc001461550) (0xc0023fe0a0) Stream removed, broadcasting: 3
I0908 23:54:02.608542 6 log.go:172] (0xc001461550) Go away received
I0908 23:54:02.608584 6 log.go:172] (0xc001461550) (0xc002f1a460) Stream removed, broadcasting: 5
Sep 8 23:54:02.608: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:54:02.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4274" for this suite.
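The curl entries above hit the test pod's `/dial` endpoint, which fans out HTTP requests to a target pod and reports which hostnames answered. As a rough sketch of how such a response could be checked (assuming, per the k8s test-image convention, a JSON body with a `responses` list; the helper name is illustrative, not part of the suite):

```python
import json

def endpoints_reached(dial_response: str) -> set:
    """Parse a /dial JSON body and return the set of hostnames that answered."""
    body = json.loads(dial_response)
    # An empty or missing "responses" list means no endpoint replied.
    return set(body.get("responses", []))

# Example body in the shape the test's curl command would receive:
sample = '{"responses": ["netserver-0"]}'
print(endpoints_reached(sample))  # {'netserver-0'}
```

The test passes once every expected pod hostname shows up in the union of these responses; "Waiting for endpoints: map[]" in the log marks the point where that set of missing endpoints is empty.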
Sep 8 23:54:26.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:54:26.742: INFO: namespace pod-network-test-4274 deletion completed in 24.129199375s
• [SLOW TEST:46.709 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:54:26.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-7b06e7d4-ae3b-4986-ba64-576f2b2f37d9
STEP: Creating a pod to test consume configMaps
Sep 8 23:54:26.842: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f363415-9c80-43d8-b646-c378d075709b" in namespace "configmap-6594" to be "success or failure"
Sep 8 23:54:26.862: INFO: Pod "pod-configmaps-8f363415-9c80-43d8-b646-c378d075709b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.232335ms
Sep 8 23:54:28.866: INFO: Pod "pod-configmaps-8f363415-9c80-43d8-b646-c378d075709b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024648501s
Sep 8 23:54:30.870: INFO: Pod "pod-configmaps-8f363415-9c80-43d8-b646-c378d075709b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02849928s
STEP: Saw pod success
Sep 8 23:54:30.870: INFO: Pod "pod-configmaps-8f363415-9c80-43d8-b646-c378d075709b" satisfied condition "success or failure"
Sep 8 23:54:30.873: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-8f363415-9c80-43d8-b646-c378d075709b container configmap-volume-test:
STEP: delete the pod
Sep 8 23:54:30.905: INFO: Waiting for pod pod-configmaps-8f363415-9c80-43d8-b646-c378d075709b to disappear
Sep 8 23:54:30.945: INFO: Pod pod-configmaps-8f363415-9c80-43d8-b646-c378d075709b no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:54:30.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6594" for this suite.
Sep 8 23:54:36.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:54:37.033: INFO: namespace configmap-6594 deletion completed in 6.084673701s
• [SLOW TEST:10.291 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:54:37.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Sep 8 23:54:41.158: INFO: Pod pod-hostip-db06133d-bc57-42db-8f11-e70fa8441393 has hostIP: 172.18.0.9
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:54:41.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2538" for this suite.
Sep 8 23:55:03.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:55:03.267: INFO: namespace pods-2538 deletion completed in 22.105642371s
• [SLOW TEST:26.234 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:55:03.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0908 23:55:15.108299 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 8 23:55:15.108: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:55:15.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1453" for this suite.
Sep 8 23:55:25.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:55:25.241: INFO: namespace gc-1453 deletion completed in 10.101157373s
• [SLOW TEST:21.973 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:55:25.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Sep 8 23:55:25.289: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:55:25.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8109" for this suite.
Sep 8 23:55:31.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:55:31.473: INFO: namespace kubectl-8109 deletion completed in 6.091368916s
• [SLOW TEST:6.232 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:55:31.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6273.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6273.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6273.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6273.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 8 23:55:37.578: INFO: DNS probes using dns-test-facdcb81-ba81-4eda-a49b-4588260cf404 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6273.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6273.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6273.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6273.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 8 23:55:45.753: INFO: File wheezy_udp@dns-test-service-3.dns-6273.svc.cluster.local from pod dns-6273/dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e contains 'foo.example.com. ' instead of 'bar.example.com.'
Sep 8 23:55:45.757: INFO: File jessie_udp@dns-test-service-3.dns-6273.svc.cluster.local from pod dns-6273/dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e contains 'foo.example.com. ' instead of 'bar.example.com.'
Sep 8 23:55:45.757: INFO: Lookups using dns-6273/dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e failed for: [wheezy_udp@dns-test-service-3.dns-6273.svc.cluster.local jessie_udp@dns-test-service-3.dns-6273.svc.cluster.local]
Sep 8 23:55:50.762: INFO: File wheezy_udp@dns-test-service-3.dns-6273.svc.cluster.local from pod dns-6273/dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e contains 'foo.example.com. ' instead of 'bar.example.com.'
Sep 8 23:55:50.766: INFO: File jessie_udp@dns-test-service-3.dns-6273.svc.cluster.local from pod dns-6273/dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e contains 'foo.example.com. ' instead of 'bar.example.com.'
Sep 8 23:55:50.766: INFO: Lookups using dns-6273/dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e failed for: [wheezy_udp@dns-test-service-3.dns-6273.svc.cluster.local jessie_udp@dns-test-service-3.dns-6273.svc.cluster.local]
Sep 8 23:55:55.762: INFO: File wheezy_udp@dns-test-service-3.dns-6273.svc.cluster.local from pod dns-6273/dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e contains 'foo.example.com. ' instead of 'bar.example.com.'
Sep 8 23:55:55.765: INFO: File jessie_udp@dns-test-service-3.dns-6273.svc.cluster.local from pod dns-6273/dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e contains 'foo.example.com. ' instead of 'bar.example.com.'
Sep 8 23:55:55.765: INFO: Lookups using dns-6273/dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e failed for: [wheezy_udp@dns-test-service-3.dns-6273.svc.cluster.local jessie_udp@dns-test-service-3.dns-6273.svc.cluster.local]
Sep 8 23:56:00.762: INFO: File wheezy_udp@dns-test-service-3.dns-6273.svc.cluster.local from pod dns-6273/dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e contains 'foo.example.com. ' instead of 'bar.example.com.'
Sep 8 23:56:00.766: INFO: File jessie_udp@dns-test-service-3.dns-6273.svc.cluster.local from pod dns-6273/dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e contains 'foo.example.com. ' instead of 'bar.example.com.'
Sep 8 23:56:00.766: INFO: Lookups using dns-6273/dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e failed for: [wheezy_udp@dns-test-service-3.dns-6273.svc.cluster.local jessie_udp@dns-test-service-3.dns-6273.svc.cluster.local]
Sep 8 23:56:05.765: INFO: DNS probes using dns-test-a0e6df3c-6238-441e-bb79-f4e16fd0a64e succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6273.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6273.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6273.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6273.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 8 23:56:14.528: INFO: DNS probes using dns-test-95fabb22-204d-4d8c-b7a0-d4f6629fbb5e succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:56:14.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6273" for this suite.
Sep 8 23:56:20.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:56:20.790: INFO: namespace dns-6273 deletion completed in 6.134137178s
• [SLOW TEST:49.316 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:56:20.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-mzlk
STEP: Creating a pod to test atomic-volume-subpath
Sep 8 23:56:20.866: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mzlk" in namespace "subpath-2759" to be "success or failure"
Sep 8 23:56:20.869: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.654118ms
Sep 8 23:56:22.873: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006348083s
Sep 8 23:56:24.876: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Running", Reason="", readiness=true. Elapsed: 4.01014649s
Sep 8 23:56:26.881: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Running", Reason="", readiness=true. Elapsed: 6.014538521s
Sep 8 23:56:28.885: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Running", Reason="", readiness=true. Elapsed: 8.019203715s
Sep 8 23:56:30.890: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Running", Reason="", readiness=true. Elapsed: 10.023432944s
Sep 8 23:56:32.893: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Running", Reason="", readiness=true. Elapsed: 12.027112867s
Sep 8 23:56:34.898: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Running", Reason="", readiness=true. Elapsed: 14.031472016s
Sep 8 23:56:36.902: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Running", Reason="", readiness=true. Elapsed: 16.035399642s
Sep 8 23:56:38.905: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Running", Reason="", readiness=true. Elapsed: 18.038801471s
Sep 8 23:56:40.909: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Running", Reason="", readiness=true. Elapsed: 20.043220934s
Sep 8 23:56:42.914: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Running", Reason="", readiness=true. Elapsed: 22.04753592s
Sep 8 23:56:44.918: INFO: Pod "pod-subpath-test-configmap-mzlk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.05176195s
STEP: Saw pod success
Sep 8 23:56:44.918: INFO: Pod "pod-subpath-test-configmap-mzlk" satisfied condition "success or failure"
Sep 8 23:56:44.921: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-mzlk container test-container-subpath-configmap-mzlk:
STEP: delete the pod
Sep 8 23:56:44.944: INFO: Waiting for pod pod-subpath-test-configmap-mzlk to disappear
Sep 8 23:56:44.949: INFO: Pod pod-subpath-test-configmap-mzlk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mzlk
Sep 8 23:56:44.949: INFO: Deleting pod "pod-subpath-test-configmap-mzlk" in namespace "subpath-2759"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:56:44.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2759" for this suite.
Sep 8 23:56:50.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:56:51.063: INFO: namespace subpath-2759 deletion completed in 6.108663143s
• [SLOW TEST:30.273 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:56:51.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-5571c45b-f7ac-4d5f-bacd-48f39e065f4b
STEP: Creating a pod to test consume configMaps
Sep 8 23:56:51.137: INFO: Waiting up to 5m0s for pod "pod-configmaps-2c1a7b5b-034d-44ab-b1c6-f6243b2af3ef" in namespace "configmap-5553" to be "success or failure"
Sep 8 23:56:51.198: INFO: Pod "pod-configmaps-2c1a7b5b-034d-44ab-b1c6-f6243b2af3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 60.860662ms
Sep 8 23:56:53.203: INFO: Pod "pod-configmaps-2c1a7b5b-034d-44ab-b1c6-f6243b2af3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065902817s
Sep 8 23:56:55.207: INFO: Pod "pod-configmaps-2c1a7b5b-034d-44ab-b1c6-f6243b2af3ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069548712s
STEP: Saw pod success
Sep 8 23:56:55.207: INFO: Pod "pod-configmaps-2c1a7b5b-034d-44ab-b1c6-f6243b2af3ef" satisfied condition "success or failure"
Sep 8 23:56:55.209: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-2c1a7b5b-034d-44ab-b1c6-f6243b2af3ef container configmap-volume-test:
STEP: delete the pod
Sep 8 23:56:55.239: INFO: Waiting for pod pod-configmaps-2c1a7b5b-034d-44ab-b1c6-f6243b2af3ef to disappear
Sep 8 23:56:55.243: INFO: Pod pod-configmaps-2c1a7b5b-034d-44ab-b1c6-f6243b2af3ef no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:56:55.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5553" for this suite.
Sep 8 23:57:01.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:57:01.331: INFO: namespace configmap-5553 deletion completed in 6.082750021s
• [SLOW TEST:10.266 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:57:01.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Sep 8 23:57:01.418: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:57:08.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3882" for this suite.
Sep 8 23:57:14.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:57:15.023: INFO: namespace init-container-3882 deletion completed in 6.092089462s

• [SLOW TEST:13.692 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:57:15.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5674
I0908 23:57:15.080119 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5674, replica count: 1
I0908 23:57:16.130478 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0908
23:57:17.130717 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0908 23:57:18.130955 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0908 23:57:19.131252 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 8 23:57:19.365: INFO: Created: latency-svc-458mw Sep 8 23:57:19.373: INFO: Got endpoints: latency-svc-458mw [142.111177ms] Sep 8 23:57:19.405: INFO: Created: latency-svc-mb5s9 Sep 8 23:57:19.418: INFO: Got endpoints: latency-svc-mb5s9 [44.559363ms] Sep 8 23:57:19.444: INFO: Created: latency-svc-rjmtc Sep 8 23:57:19.515: INFO: Got endpoints: latency-svc-rjmtc [141.82001ms] Sep 8 23:57:19.528: INFO: Created: latency-svc-tw9tr Sep 8 23:57:19.542: INFO: Got endpoints: latency-svc-tw9tr [168.86034ms] Sep 8 23:57:19.567: INFO: Created: latency-svc-qr9nm Sep 8 23:57:19.586: INFO: Got endpoints: latency-svc-qr9nm [212.654676ms] Sep 8 23:57:19.610: INFO: Created: latency-svc-r54sc Sep 8 23:57:19.653: INFO: Got endpoints: latency-svc-r54sc [279.377899ms] Sep 8 23:57:19.675: INFO: Created: latency-svc-zt5bz Sep 8 23:57:19.686: INFO: Got endpoints: latency-svc-zt5bz [312.579467ms] Sep 8 23:57:19.712: INFO: Created: latency-svc-nsqxt Sep 8 23:57:19.722: INFO: Got endpoints: latency-svc-nsqxt [348.763753ms] Sep 8 23:57:19.744: INFO: Created: latency-svc-hzpvw Sep 8 23:57:19.826: INFO: Got endpoints: latency-svc-hzpvw [140.43036ms] Sep 8 23:57:19.847: INFO: Created: latency-svc-9gb2j Sep 8 23:57:19.860: INFO: Got endpoints: latency-svc-9gb2j [486.964319ms] Sep 8 23:57:19.897: INFO: Created: latency-svc-jnmq7 Sep 8 23:57:19.915: INFO: Got endpoints: latency-svc-jnmq7 [541.76414ms] Sep 8 23:57:19.964: INFO: Created: latency-svc-wx627 Sep 8 23:57:19.968: INFO: Got endpoints: 
latency-svc-wx627 [594.818591ms] Sep 8 23:57:19.999: INFO: Created: latency-svc-plx77 Sep 8 23:57:20.011: INFO: Got endpoints: latency-svc-plx77 [637.215977ms] Sep 8 23:57:20.050: INFO: Created: latency-svc-gmsft Sep 8 23:57:20.102: INFO: Got endpoints: latency-svc-gmsft [728.144406ms] Sep 8 23:57:20.116: INFO: Created: latency-svc-lm9n4 Sep 8 23:57:20.134: INFO: Got endpoints: latency-svc-lm9n4 [760.332505ms] Sep 8 23:57:20.173: INFO: Created: latency-svc-vfdkp Sep 8 23:57:20.188: INFO: Got endpoints: latency-svc-vfdkp [814.77169ms] Sep 8 23:57:20.247: INFO: Created: latency-svc-v4tqb Sep 8 23:57:20.249: INFO: Got endpoints: latency-svc-v4tqb [875.172089ms] Sep 8 23:57:20.293: INFO: Created: latency-svc-5m8np Sep 8 23:57:20.309: INFO: Got endpoints: latency-svc-5m8np [890.972928ms] Sep 8 23:57:20.332: INFO: Created: latency-svc-vsx5d Sep 8 23:57:20.345: INFO: Got endpoints: latency-svc-vsx5d [829.579614ms] Sep 8 23:57:20.403: INFO: Created: latency-svc-tdvlh Sep 8 23:57:20.436: INFO: Got endpoints: latency-svc-tdvlh [893.978659ms] Sep 8 23:57:20.464: INFO: Created: latency-svc-hgh47 Sep 8 23:57:20.477: INFO: Got endpoints: latency-svc-hgh47 [891.079654ms] Sep 8 23:57:20.533: INFO: Created: latency-svc-xqzfp Sep 8 23:57:20.536: INFO: Got endpoints: latency-svc-xqzfp [883.676904ms] Sep 8 23:57:20.581: INFO: Created: latency-svc-fkzwb Sep 8 23:57:20.596: INFO: Got endpoints: latency-svc-fkzwb [874.295805ms] Sep 8 23:57:20.620: INFO: Created: latency-svc-wg87x Sep 8 23:57:20.665: INFO: Got endpoints: latency-svc-wg87x [838.348176ms] Sep 8 23:57:20.691: INFO: Created: latency-svc-27rbx Sep 8 23:57:20.710: INFO: Got endpoints: latency-svc-27rbx [849.539502ms] Sep 8 23:57:20.733: INFO: Created: latency-svc-cq5gl Sep 8 23:57:20.752: INFO: Got endpoints: latency-svc-cq5gl [836.909034ms] Sep 8 23:57:20.809: INFO: Created: latency-svc-2cg5p Sep 8 23:57:20.811: INFO: Got endpoints: latency-svc-2cg5p [843.212863ms] Sep 8 23:57:20.839: INFO: Created: latency-svc-dfm8p Sep 8 
23:57:20.857: INFO: Got endpoints: latency-svc-dfm8p [846.063306ms] Sep 8 23:57:20.890: INFO: Created: latency-svc-gpt68 Sep 8 23:57:20.952: INFO: Got endpoints: latency-svc-gpt68 [850.16228ms] Sep 8 23:57:20.978: INFO: Created: latency-svc-fq44g Sep 8 23:57:20.986: INFO: Got endpoints: latency-svc-fq44g [852.498847ms] Sep 8 23:57:21.016: INFO: Created: latency-svc-55k6t Sep 8 23:57:21.029: INFO: Got endpoints: latency-svc-55k6t [840.298964ms] Sep 8 23:57:21.096: INFO: Created: latency-svc-2t24z Sep 8 23:57:21.100: INFO: Got endpoints: latency-svc-2t24z [851.72624ms] Sep 8 23:57:21.126: INFO: Created: latency-svc-5x588 Sep 8 23:57:21.144: INFO: Got endpoints: latency-svc-5x588 [835.345913ms] Sep 8 23:57:21.177: INFO: Created: latency-svc-g99z4 Sep 8 23:57:21.191: INFO: Got endpoints: latency-svc-g99z4 [846.229788ms] Sep 8 23:57:21.240: INFO: Created: latency-svc-p5zz2 Sep 8 23:57:21.242: INFO: Got endpoints: latency-svc-p5zz2 [806.147303ms] Sep 8 23:57:21.267: INFO: Created: latency-svc-frwgs Sep 8 23:57:21.282: INFO: Got endpoints: latency-svc-frwgs [804.359901ms] Sep 8 23:57:21.328: INFO: Created: latency-svc-p2qbg Sep 8 23:57:21.364: INFO: Got endpoints: latency-svc-p2qbg [827.204646ms] Sep 8 23:57:21.384: INFO: Created: latency-svc-mcd9s Sep 8 23:57:21.396: INFO: Got endpoints: latency-svc-mcd9s [799.476194ms] Sep 8 23:57:21.429: INFO: Created: latency-svc-bsfn4 Sep 8 23:57:21.444: INFO: Got endpoints: latency-svc-bsfn4 [779.536901ms] Sep 8 23:57:21.503: INFO: Created: latency-svc-hznqs Sep 8 23:57:21.507: INFO: Got endpoints: latency-svc-hznqs [796.758979ms] Sep 8 23:57:21.537: INFO: Created: latency-svc-54lvw Sep 8 23:57:21.553: INFO: Got endpoints: latency-svc-54lvw [800.869909ms] Sep 8 23:57:21.576: INFO: Created: latency-svc-fwb6h Sep 8 23:57:21.600: INFO: Got endpoints: latency-svc-fwb6h [788.723989ms] Sep 8 23:57:21.659: INFO: Created: latency-svc-9whk8 Sep 8 23:57:21.661: INFO: Got endpoints: latency-svc-9whk8 [804.517044ms] Sep 8 23:57:21.690: INFO: 
Created: latency-svc-gmqn5 Sep 8 23:57:21.703: INFO: Got endpoints: latency-svc-gmqn5 [751.57299ms] Sep 8 23:57:21.729: INFO: Created: latency-svc-rwncx Sep 8 23:57:21.746: INFO: Got endpoints: latency-svc-rwncx [759.786759ms] Sep 8 23:57:21.797: INFO: Created: latency-svc-qvxk8 Sep 8 23:57:21.799: INFO: Got endpoints: latency-svc-qvxk8 [770.372313ms] Sep 8 23:57:21.825: INFO: Created: latency-svc-mhxcp Sep 8 23:57:21.842: INFO: Got endpoints: latency-svc-mhxcp [741.454689ms] Sep 8 23:57:21.864: INFO: Created: latency-svc-cdntw Sep 8 23:57:21.885: INFO: Got endpoints: latency-svc-cdntw [740.289856ms] Sep 8 23:57:21.946: INFO: Created: latency-svc-xsv82 Sep 8 23:57:21.975: INFO: Got endpoints: latency-svc-xsv82 [783.71019ms] Sep 8 23:57:22.023: INFO: Created: latency-svc-dsfj4 Sep 8 23:57:22.042: INFO: Got endpoints: latency-svc-dsfj4 [799.112377ms] Sep 8 23:57:22.084: INFO: Created: latency-svc-95fsm Sep 8 23:57:22.110: INFO: Got endpoints: latency-svc-95fsm [828.079647ms] Sep 8 23:57:22.172: INFO: Created: latency-svc-5qntt Sep 8 23:57:22.203: INFO: Got endpoints: latency-svc-5qntt [839.430263ms] Sep 8 23:57:22.248: INFO: Created: latency-svc-nw9q7 Sep 8 23:57:22.263: INFO: Got endpoints: latency-svc-nw9q7 [867.047839ms] Sep 8 23:57:22.287: INFO: Created: latency-svc-8pjkv Sep 8 23:57:22.341: INFO: Got endpoints: latency-svc-8pjkv [896.639282ms] Sep 8 23:57:22.346: INFO: Created: latency-svc-55hp5 Sep 8 23:57:22.360: INFO: Got endpoints: latency-svc-55hp5 [852.968558ms] Sep 8 23:57:22.383: INFO: Created: latency-svc-zddrc Sep 8 23:57:22.396: INFO: Got endpoints: latency-svc-zddrc [842.741592ms] Sep 8 23:57:22.421: INFO: Created: latency-svc-rtnsg Sep 8 23:57:22.438: INFO: Got endpoints: latency-svc-rtnsg [837.93544ms] Sep 8 23:57:22.491: INFO: Created: latency-svc-6bdnc Sep 8 23:57:22.494: INFO: Got endpoints: latency-svc-6bdnc [832.62836ms] Sep 8 23:57:22.524: INFO: Created: latency-svc-wgqrv Sep 8 23:57:22.535: INFO: Got endpoints: latency-svc-wgqrv 
[831.275447ms] Sep 8 23:57:22.569: INFO: Created: latency-svc-4hfds Sep 8 23:57:22.583: INFO: Got endpoints: latency-svc-4hfds [836.877784ms] Sep 8 23:57:22.634: INFO: Created: latency-svc-mvnjk Sep 8 23:57:22.643: INFO: Got endpoints: latency-svc-mvnjk [844.098812ms] Sep 8 23:57:22.695: INFO: Created: latency-svc-xwkjm Sep 8 23:57:22.781: INFO: Got endpoints: latency-svc-xwkjm [939.589896ms] Sep 8 23:57:22.824: INFO: Created: latency-svc-8cfm5 Sep 8 23:57:22.836: INFO: Got endpoints: latency-svc-8cfm5 [950.959327ms] Sep 8 23:57:22.910: INFO: Created: latency-svc-bj8pl Sep 8 23:57:22.932: INFO: Got endpoints: latency-svc-bj8pl [956.983184ms] Sep 8 23:57:22.965: INFO: Created: latency-svc-68x97 Sep 8 23:57:22.980: INFO: Got endpoints: latency-svc-68x97 [938.476394ms] Sep 8 23:57:23.000: INFO: Created: latency-svc-h5pvz Sep 8 23:57:23.041: INFO: Got endpoints: latency-svc-h5pvz [931.640577ms] Sep 8 23:57:23.057: INFO: Created: latency-svc-w59hr Sep 8 23:57:23.088: INFO: Got endpoints: latency-svc-w59hr [884.589735ms] Sep 8 23:57:23.117: INFO: Created: latency-svc-rrmnp Sep 8 23:57:23.130: INFO: Got endpoints: latency-svc-rrmnp [867.357946ms] Sep 8 23:57:23.174: INFO: Created: latency-svc-brszt Sep 8 23:57:23.197: INFO: Got endpoints: latency-svc-brszt [856.238248ms] Sep 8 23:57:23.222: INFO: Created: latency-svc-lmp74 Sep 8 23:57:23.239: INFO: Got endpoints: latency-svc-lmp74 [879.346623ms] Sep 8 23:57:23.258: INFO: Created: latency-svc-5z4xq Sep 8 23:57:23.323: INFO: Got endpoints: latency-svc-5z4xq [926.944618ms] Sep 8 23:57:23.334: INFO: Created: latency-svc-bs4x9 Sep 8 23:57:23.341: INFO: Got endpoints: latency-svc-bs4x9 [902.914211ms] Sep 8 23:57:23.370: INFO: Created: latency-svc-q9l52 Sep 8 23:57:23.380: INFO: Got endpoints: latency-svc-q9l52 [886.296352ms] Sep 8 23:57:23.420: INFO: Created: latency-svc-q49qf Sep 8 23:57:23.467: INFO: Got endpoints: latency-svc-q49qf [931.988561ms] Sep 8 23:57:23.474: INFO: Created: latency-svc-pbf5d Sep 8 23:57:23.492: INFO: 
Got endpoints: latency-svc-pbf5d [909.225171ms] Sep 8 23:57:23.516: INFO: Created: latency-svc-mlm9r Sep 8 23:57:23.534: INFO: Got endpoints: latency-svc-mlm9r [891.091249ms] Sep 8 23:57:23.561: INFO: Created: latency-svc-tg22g Sep 8 23:57:23.605: INFO: Got endpoints: latency-svc-tg22g [822.988489ms] Sep 8 23:57:23.621: INFO: Created: latency-svc-rtxxz Sep 8 23:57:23.631: INFO: Got endpoints: latency-svc-rtxxz [795.205186ms] Sep 8 23:57:23.663: INFO: Created: latency-svc-4f8w5 Sep 8 23:57:23.673: INFO: Got endpoints: latency-svc-4f8w5 [741.220429ms] Sep 8 23:57:23.699: INFO: Created: latency-svc-rqlxx Sep 8 23:57:23.736: INFO: Got endpoints: latency-svc-rqlxx [756.255421ms] Sep 8 23:57:23.749: INFO: Created: latency-svc-ghhd8 Sep 8 23:57:23.764: INFO: Got endpoints: latency-svc-ghhd8 [722.575202ms] Sep 8 23:57:23.792: INFO: Created: latency-svc-m6dmn Sep 8 23:57:23.806: INFO: Got endpoints: latency-svc-m6dmn [717.970627ms] Sep 8 23:57:23.828: INFO: Created: latency-svc-jqkdw Sep 8 23:57:23.874: INFO: Got endpoints: latency-svc-jqkdw [743.355736ms] Sep 8 23:57:23.884: INFO: Created: latency-svc-xpbb5 Sep 8 23:57:23.903: INFO: Got endpoints: latency-svc-xpbb5 [705.234598ms] Sep 8 23:57:23.928: INFO: Created: latency-svc-cfmm4 Sep 8 23:57:23.969: INFO: Got endpoints: latency-svc-cfmm4 [729.332765ms] Sep 8 23:57:24.026: INFO: Created: latency-svc-qgz2d Sep 8 23:57:24.047: INFO: Got endpoints: latency-svc-qgz2d [724.018919ms] Sep 8 23:57:24.074: INFO: Created: latency-svc-5xwtr Sep 8 23:57:24.089: INFO: Got endpoints: latency-svc-5xwtr [747.778402ms] Sep 8 23:57:24.173: INFO: Created: latency-svc-z2djw Sep 8 23:57:24.176: INFO: Got endpoints: latency-svc-z2djw [795.858967ms] Sep 8 23:57:24.203: INFO: Created: latency-svc-rhkgf Sep 8 23:57:24.221: INFO: Got endpoints: latency-svc-rhkgf [754.577559ms] Sep 8 23:57:24.253: INFO: Created: latency-svc-nbqmd Sep 8 23:57:24.299: INFO: Got endpoints: latency-svc-nbqmd [806.710823ms] Sep 8 23:57:24.320: INFO: Created: 
latency-svc-v4mtv Sep 8 23:57:24.336: INFO: Got endpoints: latency-svc-v4mtv [801.80866ms] Sep 8 23:57:24.362: INFO: Created: latency-svc-d899h Sep 8 23:57:24.378: INFO: Got endpoints: latency-svc-d899h [773.523974ms] Sep 8 23:57:24.437: INFO: Created: latency-svc-m7vwx Sep 8 23:57:24.440: INFO: Got endpoints: latency-svc-m7vwx [809.242364ms] Sep 8 23:57:24.472: INFO: Created: latency-svc-7sphk Sep 8 23:57:24.486: INFO: Got endpoints: latency-svc-7sphk [813.238184ms] Sep 8 23:57:24.521: INFO: Created: latency-svc-zz4bc Sep 8 23:57:24.535: INFO: Got endpoints: latency-svc-zz4bc [798.742513ms] Sep 8 23:57:24.617: INFO: Created: latency-svc-qfkvn Sep 8 23:57:24.620: INFO: Got endpoints: latency-svc-qfkvn [856.14817ms] Sep 8 23:57:24.662: INFO: Created: latency-svc-qm2wf Sep 8 23:57:24.680: INFO: Got endpoints: latency-svc-qm2wf [874.064784ms] Sep 8 23:57:24.700: INFO: Created: latency-svc-lmhr2 Sep 8 23:57:24.784: INFO: Got endpoints: latency-svc-lmhr2 [910.250163ms] Sep 8 23:57:24.806: INFO: Created: latency-svc-bk2n6 Sep 8 23:57:24.818: INFO: Got endpoints: latency-svc-bk2n6 [915.378107ms] Sep 8 23:57:24.853: INFO: Created: latency-svc-ksvnj Sep 8 23:57:24.872: INFO: Got endpoints: latency-svc-ksvnj [903.686166ms] Sep 8 23:57:24.922: INFO: Created: latency-svc-ftzzn Sep 8 23:57:24.926: INFO: Got endpoints: latency-svc-ftzzn [879.204204ms] Sep 8 23:57:24.953: INFO: Created: latency-svc-bxzk4 Sep 8 23:57:24.956: INFO: Got endpoints: latency-svc-bxzk4 [867.222681ms] Sep 8 23:57:24.988: INFO: Created: latency-svc-l6dgh Sep 8 23:57:24.993: INFO: Got endpoints: latency-svc-l6dgh [816.327246ms] Sep 8 23:57:25.066: INFO: Created: latency-svc-kpbpb Sep 8 23:57:25.069: INFO: Got endpoints: latency-svc-kpbpb [847.109879ms] Sep 8 23:57:25.093: INFO: Created: latency-svc-r9gcd Sep 8 23:57:25.113: INFO: Got endpoints: latency-svc-r9gcd [813.763964ms] Sep 8 23:57:25.163: INFO: Created: latency-svc-l6mps Sep 8 23:57:25.198: INFO: Got endpoints: latency-svc-l6mps [861.653447ms] Sep 
8 23:57:25.217: INFO: Created: latency-svc-rlmcn Sep 8 23:57:25.234: INFO: Got endpoints: latency-svc-rlmcn [855.421252ms] Sep 8 23:57:25.259: INFO: Created: latency-svc-fnjs9 Sep 8 23:57:25.365: INFO: Created: latency-svc-l22xp Sep 8 23:57:25.365: INFO: Got endpoints: latency-svc-fnjs9 [925.157895ms] Sep 8 23:57:25.368: INFO: Got endpoints: latency-svc-l22xp [881.936837ms] Sep 8 23:57:25.393: INFO: Created: latency-svc-rpzqw Sep 8 23:57:25.429: INFO: Got endpoints: latency-svc-rpzqw [893.764208ms] Sep 8 23:57:25.515: INFO: Created: latency-svc-dfdlc Sep 8 23:57:25.517: INFO: Got endpoints: latency-svc-dfdlc [897.223818ms] Sep 8 23:57:25.546: INFO: Created: latency-svc-627zl Sep 8 23:57:25.559: INFO: Got endpoints: latency-svc-627zl [878.470049ms] Sep 8 23:57:25.591: INFO: Created: latency-svc-bvwrc Sep 8 23:57:25.607: INFO: Got endpoints: latency-svc-bvwrc [822.507501ms] Sep 8 23:57:25.671: INFO: Created: latency-svc-rg44x Sep 8 23:57:25.673: INFO: Got endpoints: latency-svc-rg44x [855.089389ms] Sep 8 23:57:25.699: INFO: Created: latency-svc-kmkr8 Sep 8 23:57:25.729: INFO: Got endpoints: latency-svc-kmkr8 [856.477788ms] Sep 8 23:57:25.762: INFO: Created: latency-svc-vnd85 Sep 8 23:57:25.832: INFO: Got endpoints: latency-svc-vnd85 [905.658509ms] Sep 8 23:57:25.835: INFO: Created: latency-svc-j59s4 Sep 8 23:57:25.837: INFO: Got endpoints: latency-svc-j59s4 [880.611187ms] Sep 8 23:57:25.864: INFO: Created: latency-svc-qgtmv Sep 8 23:57:25.872: INFO: Got endpoints: latency-svc-qgtmv [879.158286ms] Sep 8 23:57:25.897: INFO: Created: latency-svc-8djqt Sep 8 23:57:25.914: INFO: Got endpoints: latency-svc-8djqt [845.706102ms] Sep 8 23:57:25.970: INFO: Created: latency-svc-g5nwm Sep 8 23:57:25.992: INFO: Got endpoints: latency-svc-g5nwm [879.462762ms] Sep 8 23:57:26.038: INFO: Created: latency-svc-9dm66 Sep 8 23:57:26.046: INFO: Got endpoints: latency-svc-9dm66 [848.472671ms] Sep 8 23:57:26.114: INFO: Created: latency-svc-sjv8k Sep 8 23:57:26.117: INFO: Got endpoints: 
latency-svc-sjv8k [883.219371ms] Sep 8 23:57:26.173: INFO: Created: latency-svc-28vr2 Sep 8 23:57:26.191: INFO: Got endpoints: latency-svc-28vr2 [825.720645ms] Sep 8 23:57:26.257: INFO: Created: latency-svc-cg8d9 Sep 8 23:57:26.260: INFO: Got endpoints: latency-svc-cg8d9 [891.975405ms] Sep 8 23:57:26.293: INFO: Created: latency-svc-gh7cg Sep 8 23:57:26.312: INFO: Got endpoints: latency-svc-gh7cg [883.254272ms] Sep 8 23:57:26.336: INFO: Created: latency-svc-7f9c4 Sep 8 23:57:26.348: INFO: Got endpoints: latency-svc-7f9c4 [830.711229ms] Sep 8 23:57:26.395: INFO: Created: latency-svc-v5hqv Sep 8 23:57:26.409: INFO: Got endpoints: latency-svc-v5hqv [850.418323ms] Sep 8 23:57:26.441: INFO: Created: latency-svc-ppxtb Sep 8 23:57:26.450: INFO: Got endpoints: latency-svc-ppxtb [843.159121ms] Sep 8 23:57:26.478: INFO: Created: latency-svc-9lgzp Sep 8 23:57:26.492: INFO: Got endpoints: latency-svc-9lgzp [819.216803ms] Sep 8 23:57:26.535: INFO: Created: latency-svc-fq85w Sep 8 23:57:26.568: INFO: Got endpoints: latency-svc-fq85w [839.266776ms] Sep 8 23:57:26.569: INFO: Created: latency-svc-78h5m Sep 8 23:57:26.590: INFO: Got endpoints: latency-svc-78h5m [757.652402ms] Sep 8 23:57:26.673: INFO: Created: latency-svc-2cvz2 Sep 8 23:57:26.703: INFO: Got endpoints: latency-svc-2cvz2 [866.423449ms] Sep 8 23:57:26.743: INFO: Created: latency-svc-4sf4m Sep 8 23:57:26.757: INFO: Got endpoints: latency-svc-4sf4m [885.487781ms] Sep 8 23:57:26.822: INFO: Created: latency-svc-97fr9 Sep 8 23:57:26.835: INFO: Got endpoints: latency-svc-97fr9 [920.868047ms] Sep 8 23:57:26.877: INFO: Created: latency-svc-kqnjj Sep 8 23:57:26.890: INFO: Got endpoints: latency-svc-kqnjj [897.112927ms] Sep 8 23:57:26.970: INFO: Created: latency-svc-vjdd9 Sep 8 23:57:26.973: INFO: Got endpoints: latency-svc-vjdd9 [926.537613ms] Sep 8 23:57:27.007: INFO: Created: latency-svc-7wxjw Sep 8 23:57:27.036: INFO: Got endpoints: latency-svc-7wxjw [918.947293ms] Sep 8 23:57:27.066: INFO: Created: latency-svc-4n6jz Sep 8 
23:57:27.107: INFO: Got endpoints: latency-svc-4n6jz [916.276772ms] Sep 8 23:57:27.114: INFO: Created: latency-svc-7kdj8 Sep 8 23:57:27.130: INFO: Got endpoints: latency-svc-7kdj8 [869.575981ms] Sep 8 23:57:27.153: INFO: Created: latency-svc-zfsrj Sep 8 23:57:27.178: INFO: Got endpoints: latency-svc-zfsrj [865.960877ms] Sep 8 23:57:27.201: INFO: Created: latency-svc-gkjrz Sep 8 23:57:27.245: INFO: Got endpoints: latency-svc-gkjrz [896.680782ms] Sep 8 23:57:27.263: INFO: Created: latency-svc-7scrm Sep 8 23:57:27.281: INFO: Got endpoints: latency-svc-7scrm [871.770311ms] Sep 8 23:57:27.300: INFO: Created: latency-svc-22kmx Sep 8 23:57:27.311: INFO: Got endpoints: latency-svc-22kmx [860.853223ms] Sep 8 23:57:27.342: INFO: Created: latency-svc-krzt2 Sep 8 23:57:27.395: INFO: Got endpoints: latency-svc-krzt2 [902.296507ms] Sep 8 23:57:27.417: INFO: Created: latency-svc-9tfb6 Sep 8 23:57:27.432: INFO: Got endpoints: latency-svc-9tfb6 [863.925553ms] Sep 8 23:57:27.459: INFO: Created: latency-svc-kstlz Sep 8 23:57:27.474: INFO: Got endpoints: latency-svc-kstlz [884.083839ms] Sep 8 23:57:27.495: INFO: Created: latency-svc-52gnc Sep 8 23:57:27.532: INFO: Got endpoints: latency-svc-52gnc [828.944723ms] Sep 8 23:57:27.569: INFO: Created: latency-svc-t5ldw Sep 8 23:57:27.588: INFO: Got endpoints: latency-svc-t5ldw [830.914871ms] Sep 8 23:57:27.612: INFO: Created: latency-svc-4zktj Sep 8 23:57:27.671: INFO: Got endpoints: latency-svc-4zktj [835.192616ms] Sep 8 23:57:27.699: INFO: Created: latency-svc-25czn Sep 8 23:57:27.714: INFO: Got endpoints: latency-svc-25czn [824.749045ms] Sep 8 23:57:27.735: INFO: Created: latency-svc-t6fp7 Sep 8 23:57:27.751: INFO: Got endpoints: latency-svc-t6fp7 [777.949842ms] Sep 8 23:57:27.811: INFO: Created: latency-svc-65djl Sep 8 23:57:27.812: INFO: Got endpoints: latency-svc-65djl [776.208409ms] Sep 8 23:57:27.841: INFO: Created: latency-svc-nbtx2 Sep 8 23:57:27.853: INFO: Got endpoints: latency-svc-nbtx2 [745.571137ms] Sep 8 23:57:27.876: INFO: 
Created: latency-svc-tg4rm Sep 8 23:57:27.884: INFO: Got endpoints: latency-svc-tg4rm [753.651098ms] Sep 8 23:57:27.905: INFO: Created: latency-svc-778n2 Sep 8 23:57:27.939: INFO: Got endpoints: latency-svc-778n2 [760.895963ms] Sep 8 23:57:27.956: INFO: Created: latency-svc-bj4w2 Sep 8 23:57:27.975: INFO: Got endpoints: latency-svc-bj4w2 [729.556199ms] Sep 8 23:57:28.017: INFO: Created: latency-svc-2nd5h Sep 8 23:57:28.096: INFO: Got endpoints: latency-svc-2nd5h [814.722652ms] Sep 8 23:57:28.098: INFO: Created: latency-svc-9lmhz Sep 8 23:57:28.113: INFO: Got endpoints: latency-svc-9lmhz [802.167881ms] Sep 8 23:57:28.161: INFO: Created: latency-svc-kd75m Sep 8 23:57:28.173: INFO: Got endpoints: latency-svc-kd75m [778.1561ms] Sep 8 23:57:28.193: INFO: Created: latency-svc-9nd8d Sep 8 23:57:28.275: INFO: Got endpoints: latency-svc-9nd8d [842.933013ms] Sep 8 23:57:28.299: INFO: Created: latency-svc-jfp2z Sep 8 23:57:28.329: INFO: Got endpoints: latency-svc-jfp2z [855.156854ms] Sep 8 23:57:28.373: INFO: Created: latency-svc-nrw4t Sep 8 23:57:28.413: INFO: Got endpoints: latency-svc-nrw4t [880.925088ms] Sep 8 23:57:28.421: INFO: Created: latency-svc-5hw7r Sep 8 23:57:28.438: INFO: Got endpoints: latency-svc-5hw7r [849.886258ms] Sep 8 23:57:28.457: INFO: Created: latency-svc-zwz8p Sep 8 23:57:28.469: INFO: Got endpoints: latency-svc-zwz8p [798.283412ms] Sep 8 23:57:28.491: INFO: Created: latency-svc-dskbq Sep 8 23:57:28.505: INFO: Got endpoints: latency-svc-dskbq [790.213315ms] Sep 8 23:57:28.551: INFO: Created: latency-svc-xgzlh Sep 8 23:57:28.559: INFO: Got endpoints: latency-svc-xgzlh [807.986057ms] Sep 8 23:57:28.587: INFO: Created: latency-svc-k4lb4 Sep 8 23:57:28.601: INFO: Got endpoints: latency-svc-k4lb4 [788.905406ms] Sep 8 23:57:28.625: INFO: Created: latency-svc-5mhlw Sep 8 23:57:28.694: INFO: Got endpoints: latency-svc-5mhlw [840.878148ms] Sep 8 23:57:28.734: INFO: Created: latency-svc-65wgk Sep 8 23:57:28.764: INFO: Got endpoints: latency-svc-65wgk 
[879.831522ms] Sep 8 23:57:28.838: INFO: Created: latency-svc-sx7q4 Sep 8 23:57:28.847: INFO: Got endpoints: latency-svc-sx7q4 [908.143255ms] Sep 8 23:57:28.919: INFO: Created: latency-svc-vcqj4 Sep 8 23:57:28.982: INFO: Got endpoints: latency-svc-vcqj4 [1.00714012s] Sep 8 23:57:28.984: INFO: Created: latency-svc-z4zm4 Sep 8 23:57:28.992: INFO: Got endpoints: latency-svc-z4zm4 [896.380012ms] Sep 8 23:57:29.027: INFO: Created: latency-svc-v8cg5 Sep 8 23:57:29.065: INFO: Got endpoints: latency-svc-v8cg5 [952.262234ms] Sep 8 23:57:29.120: INFO: Created: latency-svc-b4rhw Sep 8 23:57:29.137: INFO: Got endpoints: latency-svc-b4rhw [963.902978ms] Sep 8 23:57:29.156: INFO: Created: latency-svc-pwptf Sep 8 23:57:29.173: INFO: Got endpoints: latency-svc-pwptf [897.858098ms] Sep 8 23:57:29.195: INFO: Created: latency-svc-mp8g2 Sep 8 23:57:29.209: INFO: Got endpoints: latency-svc-mp8g2 [879.979852ms] Sep 8 23:57:29.263: INFO: Created: latency-svc-vt8kp Sep 8 23:57:29.269: INFO: Got endpoints: latency-svc-vt8kp [855.566774ms] Sep 8 23:57:29.297: INFO: Created: latency-svc-fc5gx Sep 8 23:57:29.305: INFO: Got endpoints: latency-svc-fc5gx [867.151768ms] Sep 8 23:57:29.336: INFO: Created: latency-svc-ds8hd Sep 8 23:57:29.360: INFO: Got endpoints: latency-svc-ds8hd [890.94584ms] Sep 8 23:57:29.413: INFO: Created: latency-svc-d7znm Sep 8 23:57:29.440: INFO: Got endpoints: latency-svc-d7znm [935.59411ms] Sep 8 23:57:29.495: INFO: Created: latency-svc-t4vk6 Sep 8 23:57:29.562: INFO: Got endpoints: latency-svc-t4vk6 [1.00335253s] Sep 8 23:57:29.576: INFO: Created: latency-svc-r8mtn Sep 8 23:57:29.594: INFO: Got endpoints: latency-svc-r8mtn [993.255639ms] Sep 8 23:57:29.626: INFO: Created: latency-svc-zl957 Sep 8 23:57:29.650: INFO: Got endpoints: latency-svc-zl957 [955.386832ms] Sep 8 23:57:29.694: INFO: Created: latency-svc-9wgcs Sep 8 23:57:29.703: INFO: Got endpoints: latency-svc-9wgcs [939.250737ms] Sep 8 23:57:29.723: INFO: Created: latency-svc-nxkwv Sep 8 23:57:29.764: INFO: Got 
endpoints: latency-svc-nxkwv [916.920756ms] Sep 8 23:57:29.832: INFO: Created: latency-svc-mhhlp Sep 8 23:57:29.841: INFO: Got endpoints: latency-svc-mhhlp [859.278566ms] Sep 8 23:57:29.888: INFO: Created: latency-svc-xkpvz Sep 8 23:57:29.901: INFO: Got endpoints: latency-svc-xkpvz [909.318221ms] Sep 8 23:57:29.985: INFO: Created: latency-svc-jjtx7 Sep 8 23:57:30.014: INFO: Got endpoints: latency-svc-jjtx7 [948.203475ms] Sep 8 23:57:30.014: INFO: Created: latency-svc-fcf9l Sep 8 23:57:30.028: INFO: Got endpoints: latency-svc-fcf9l [891.178928ms] Sep 8 23:57:30.055: INFO: Created: latency-svc-tnjg7 Sep 8 23:57:30.070: INFO: Got endpoints: latency-svc-tnjg7 [897.3666ms] Sep 8 23:57:30.150: INFO: Created: latency-svc-5vnkn Sep 8 23:57:30.154: INFO: Got endpoints: latency-svc-5vnkn [944.912499ms] Sep 8 23:57:30.200: INFO: Created: latency-svc-rfzfm Sep 8 23:57:30.215: INFO: Got endpoints: latency-svc-rfzfm [945.966504ms] Sep 8 23:57:30.235: INFO: Created: latency-svc-t4nbp Sep 8 23:57:30.275: INFO: Got endpoints: latency-svc-t4nbp [969.527588ms] Sep 8 23:57:30.289: INFO: Created: latency-svc-rbfs9 Sep 8 23:57:30.306: INFO: Got endpoints: latency-svc-rbfs9 [945.63352ms] Sep 8 23:57:30.340: INFO: Created: latency-svc-f528r Sep 8 23:57:30.360: INFO: Got endpoints: latency-svc-f528r [919.224789ms] Sep 8 23:57:30.413: INFO: Created: latency-svc-p27hb Sep 8 23:57:30.420: INFO: Got endpoints: latency-svc-p27hb [857.222743ms] Sep 8 23:57:30.442: INFO: Created: latency-svc-j7bm5 Sep 8 23:57:30.469: INFO: Got endpoints: latency-svc-j7bm5 [874.749063ms] Sep 8 23:57:30.500: INFO: Created: latency-svc-d2vgr Sep 8 23:57:30.550: INFO: Got endpoints: latency-svc-d2vgr [900.690012ms] Sep 8 23:57:30.565: INFO: Created: latency-svc-l7qhv Sep 8 23:57:30.583: INFO: Got endpoints: latency-svc-l7qhv [879.626318ms] Sep 8 23:57:30.605: INFO: Created: latency-svc-qdzk2 Sep 8 23:57:30.625: INFO: Got endpoints: latency-svc-qdzk2 [860.525185ms] Sep 8 23:57:30.701: INFO: Created: latency-svc-vv9n7 
Sep 8 23:57:30.709: INFO: Got endpoints: latency-svc-vv9n7 [867.649407ms] Sep 8 23:57:30.709: INFO: Latencies: [44.559363ms 140.43036ms 141.82001ms 168.86034ms 212.654676ms 279.377899ms 312.579467ms 348.763753ms 486.964319ms 541.76414ms 594.818591ms 637.215977ms 705.234598ms 717.970627ms 722.575202ms 724.018919ms 728.144406ms 729.332765ms 729.556199ms 740.289856ms 741.220429ms 741.454689ms 743.355736ms 745.571137ms 747.778402ms 751.57299ms 753.651098ms 754.577559ms 756.255421ms 757.652402ms 759.786759ms 760.332505ms 760.895963ms 770.372313ms 773.523974ms 776.208409ms 777.949842ms 778.1561ms 779.536901ms 783.71019ms 788.723989ms 788.905406ms 790.213315ms 795.205186ms 795.858967ms 796.758979ms 798.283412ms 798.742513ms 799.112377ms 799.476194ms 800.869909ms 801.80866ms 802.167881ms 804.359901ms 804.517044ms 806.147303ms 806.710823ms 807.986057ms 809.242364ms 813.238184ms 813.763964ms 814.722652ms 814.77169ms 816.327246ms 819.216803ms 822.507501ms 822.988489ms 824.749045ms 825.720645ms 827.204646ms 828.079647ms 828.944723ms 829.579614ms 830.711229ms 830.914871ms 831.275447ms 832.62836ms 835.192616ms 835.345913ms 836.877784ms 836.909034ms 837.93544ms 838.348176ms 839.266776ms 839.430263ms 840.298964ms 840.878148ms 842.741592ms 842.933013ms 843.159121ms 843.212863ms 844.098812ms 845.706102ms 846.063306ms 846.229788ms 847.109879ms 848.472671ms 849.539502ms 849.886258ms 850.16228ms 850.418323ms 851.72624ms 852.498847ms 852.968558ms 855.089389ms 855.156854ms 855.421252ms 855.566774ms 856.14817ms 856.238248ms 856.477788ms 857.222743ms 859.278566ms 860.525185ms 860.853223ms 861.653447ms 863.925553ms 865.960877ms 866.423449ms 867.047839ms 867.151768ms 867.222681ms 867.357946ms 867.649407ms 869.575981ms 871.770311ms 874.064784ms 874.295805ms 874.749063ms 875.172089ms 878.470049ms 879.158286ms 879.204204ms 879.346623ms 879.462762ms 879.626318ms 879.831522ms 879.979852ms 880.611187ms 880.925088ms 881.936837ms 883.219371ms 883.254272ms 883.676904ms 884.083839ms 884.589735ms 
885.487781ms 886.296352ms 890.94584ms 890.972928ms 891.079654ms 891.091249ms 891.178928ms 891.975405ms 893.764208ms 893.978659ms 896.380012ms 896.639282ms 896.680782ms 897.112927ms 897.223818ms 897.3666ms 897.858098ms 900.690012ms 902.296507ms 902.914211ms 903.686166ms 905.658509ms 908.143255ms 909.225171ms 909.318221ms 910.250163ms 915.378107ms 916.276772ms 916.920756ms 918.947293ms 919.224789ms 920.868047ms 925.157895ms 926.537613ms 926.944618ms 931.640577ms 931.988561ms 935.59411ms 938.476394ms 939.250737ms 939.589896ms 944.912499ms 945.63352ms 945.966504ms 948.203475ms 950.959327ms 952.262234ms 955.386832ms 956.983184ms 963.902978ms 969.527588ms 993.255639ms 1.00335253s 1.00714012s]
Sep 8 23:57:30.709: INFO: 50 %ile: 850.418323ms
Sep 8 23:57:30.709: INFO: 90 %ile: 926.944618ms
Sep 8 23:57:30.709: INFO: 99 %ile: 1.00335253s
Sep 8 23:57:30.709: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:57:30.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5674" for this suite.
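The "%ile" lines summarize the 200 endpoint-propagation latencies collected above. A minimal sketch of a nearest-rank percentile calculation over such samples (the function name and sample values are illustrative, not the e2e framework's actual code, which may round or interpolate differently):

```python
def percentile(samples, p):
    """Nearest-rank percentile: smallest sample >= p percent of the data."""
    ordered = sorted(samples)
    # ceil(p/100 * n) as an integer rank, clamped so p=0 still yields a value
    rank = max(1, -(-p * len(ordered) // 100))
    return ordered[rank - 1]

latencies_ms = [44.6, 140.4, 850.4, 926.9, 1003.4, 1007.1]  # illustrative values
p50 = percentile(latencies_ms, 50)  # 850.4 with these illustrative samples
p99 = percentile(latencies_ms, 99)  # 1007.1 with these illustrative samples
```

The test asserts that these percentiles stay below fixed thresholds; the samples themselves come from timing service creation to endpoint observation.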
Sep 8 23:58:04.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:58:04.822: INFO: namespace svc-latency-5674 deletion completed in 34.099914407s

• [SLOW TEST:49.798 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:58:04.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6894356d-6de7-4767-944a-e8dd72b11840
STEP: Creating a pod to test consume secrets
Sep 8 23:58:04.920: INFO: Waiting up to 5m0s for pod "pod-secrets-f03abcb1-63a2-4f2a-8602-2fffa600d9ab" in namespace "secrets-3833" to be "success or failure"
Sep 8 23:58:04.931: INFO: Pod "pod-secrets-f03abcb1-63a2-4f2a-8602-2fffa600d9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 10.193541ms
Sep 8 23:58:06.951: INFO: Pod "pod-secrets-f03abcb1-63a2-4f2a-8602-2fffa600d9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030194215s
Sep 8 23:58:08.955: INFO: Pod "pod-secrets-f03abcb1-63a2-4f2a-8602-2fffa600d9ab": Phase="Running", Reason="", readiness=true. Elapsed: 4.034636069s
Sep 8 23:58:10.959: INFO: Pod "pod-secrets-f03abcb1-63a2-4f2a-8602-2fffa600d9ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03851241s
STEP: Saw pod success
Sep 8 23:58:10.959: INFO: Pod "pod-secrets-f03abcb1-63a2-4f2a-8602-2fffa600d9ab" satisfied condition "success or failure"
Sep 8 23:58:10.961: INFO: Trying to get logs from node iruya-worker pod pod-secrets-f03abcb1-63a2-4f2a-8602-2fffa600d9ab container secret-volume-test: 
STEP: delete the pod
Sep 8 23:58:10.986: INFO: Waiting for pod pod-secrets-f03abcb1-63a2-4f2a-8602-2fffa600d9ab to disappear
Sep 8 23:58:10.990: INFO: Pod pod-secrets-f03abcb1-63a2-4f2a-8602-2fffa600d9ab no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:58:10.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3833" for this suite.
Sep 8 23:58:17.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:58:17.115: INFO: namespace secrets-3833 deletion completed in 6.121815847s
• [SLOW TEST:12.293 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:58:17.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 8 23:58:21.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4466" for this suite.
Sep 8 23:59:07.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 8 23:59:07.322: INFO: namespace kubelet-test-4466 deletion completed in 46.089172253s
• [SLOW TEST:50.206 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox Pod with hostAliases
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 8 23:59:07.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-d8abf82d-d582-4ace-8f9b-51c826afb580 in namespace container-probe-9222
Sep 8 23:59:11.436: INFO: Started pod busybox-d8abf82d-d582-4ace-8f9b-51c826afb580 in namespace container-probe-9222
STEP: checking the pod's current state and verifying that restartCount is present
Sep 8 23:59:11.439: INFO: Initial restart count of pod busybox-d8abf82d-d582-4ace-8f9b-51c826afb580 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:03:12.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9222" for this suite.
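The liveness-probe test above starts a busybox pod with an exec probe (cat /tmp/health) and asserts that restartCount stays 0. The kubelet's consecutive-failure-threshold behavior can be sketched roughly as follows; runProbes and its inputs are illustrative, not kubelet code:

```go
package main

import "fmt"

// runProbes simulates kubelet liveness handling in spirit: after
// failureThreshold consecutive probe failures the container is restarted
// and the failure counter resets. Timing, backoff, and initialDelaySeconds
// are omitted from this sketch.
func runProbes(results []bool, failureThreshold int) (restarts int) {
	consecutive := 0
	for _, ok := range results {
		if ok {
			consecutive = 0
			continue
		}
		consecutive++
		if consecutive >= failureThreshold {
			restarts++
			consecutive = 0
		}
	}
	return restarts
}

func main() {
	// A probe that always finds /tmp/health never triggers a restart,
	// matching the "restartCount is 0" expectation in the log.
	healthy := []bool{true, true, true, true}
	fmt.Println("restarts:", runProbes(healthy, 3))

	// Three consecutive failures cross the threshold exactly once.
	flaky := []bool{true, false, false, false, true}
	fmt.Println("restarts:", runProbes(flaky, 3))
}
```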
Sep 9 00:03:18.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:03:18.140: INFO: namespace container-probe-9222 deletion completed in 6.119423972s
• [SLOW TEST:250.818 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:03:18.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 9 00:03:18.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Sep 9 00:03:18.399: INFO: stderr: ""
Sep 9 00:03:18.399: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15+\", GitVersion:\"v1.15.13-beta.0.1+a34f1e483104bd\", GitCommit:\"a34f1e483104bd51c3e9a6aec3dbbcf6301789da\", GitTreeState:\"clean\", BuildDate:\"2020-09-07T18:56:50Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:03:18.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4320" for this suite.
Sep 9 00:03:24.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:03:24.524: INFO: namespace kubectl-4320 deletion completed in 6.121650043s
• [SLOW TEST:6.384 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl version
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check is all data is printed [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:03:24.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Sep 9 00:03:24.654: INFO: Waiting up to 5m0s for pod "var-expansion-8215be5b-952d-4228-a64e-463854cd17ab" in namespace "var-expansion-48" to be "success or failure"
Sep 9 00:03:24.661: INFO: Pod "var-expansion-8215be5b-952d-4228-a64e-463854cd17ab": Phase="Pending", Reason="", readiness=false. Elapsed: 7.478435ms
Sep 9 00:03:26.665: INFO: Pod "var-expansion-8215be5b-952d-4228-a64e-463854cd17ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010916819s
Sep 9 00:03:28.669: INFO: Pod "var-expansion-8215be5b-952d-4228-a64e-463854cd17ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01509791s
STEP: Saw pod success
Sep 9 00:03:28.669: INFO: Pod "var-expansion-8215be5b-952d-4228-a64e-463854cd17ab" satisfied condition "success or failure"
Sep 9 00:03:28.672: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-8215be5b-952d-4228-a64e-463854cd17ab container dapi-container:
STEP: delete the pod
Sep 9 00:03:28.773: INFO: Waiting for pod var-expansion-8215be5b-952d-4228-a64e-463854cd17ab to disappear
Sep 9 00:03:28.847: INFO: Pod var-expansion-8215be5b-952d-4228-a64e-463854cd17ab no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:03:28.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-48" for this suite.
Sep 9 00:03:34.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:03:35.005: INFO: namespace var-expansion-48 deletion completed in 6.154677224s
• [SLOW TEST:10.480 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:03:35.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:03:40.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3285" for this suite.
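Adoption in the ReplicationController test above hinges on the orphan pod's labels satisfying the controller's equality-based selector. A sketch of that matching rule (omitting the ownerReference checks real adoption also performs):

```go
package main

import "fmt"

// matchesSelector reports whether a pod's labels satisfy an equality-based
// selector: every key/value pair in the selector must be present in the
// pod's labels. Extra pod labels are ignored.
func matchesSelector(podLabels, selector map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	// Mirrors the log: a pod with a 'name' label and an RC whose selector matches.
	orphan := map[string]string{"name": "pod-adoption"}
	rcSelector := map[string]string{"name": "pod-adoption"}
	fmt.Println(matchesSelector(orphan, rcSelector))                              // adopted
	fmt.Println(matchesSelector(map[string]string{"name": "other"}, rcSelector)) // ignored
}
```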
Sep 9 00:04:02.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:04:02.204: INFO: namespace replication-controller-3285 deletion completed in 22.091496277s
• [SLOW TEST:27.199 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:04:02.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Sep 9 00:04:06.824: INFO: Successfully updated pod "annotationupdate01fbc646-2198-4fd9-b7de-0a68c1872cb1"
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:04:10.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1956" for this suite.
Sep 9 00:04:32.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:04:32.946: INFO: namespace downward-api-1956 deletion completed in 22.080196697s
• [SLOW TEST:30.741 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:04:32.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 9 00:04:33.036: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d47bd1e-7000-4a88-b272-02de650e7f62" in namespace "downward-api-9790" to be "success or failure"
Sep 9 00:04:33.059: INFO: Pod "downwardapi-volume-5d47bd1e-7000-4a88-b272-02de650e7f62": Phase="Pending", Reason="", readiness=false. Elapsed: 22.67285ms
Sep 9 00:04:35.063: INFO: Pod "downwardapi-volume-5d47bd1e-7000-4a88-b272-02de650e7f62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025994112s
Sep 9 00:04:37.066: INFO: Pod "downwardapi-volume-5d47bd1e-7000-4a88-b272-02de650e7f62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029925796s
STEP: Saw pod success
Sep 9 00:04:37.067: INFO: Pod "downwardapi-volume-5d47bd1e-7000-4a88-b272-02de650e7f62" satisfied condition "success or failure"
Sep 9 00:04:37.069: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5d47bd1e-7000-4a88-b272-02de650e7f62 container client-container:
STEP: delete the pod
Sep 9 00:04:37.114: INFO: Waiting for pod downwardapi-volume-5d47bd1e-7000-4a88-b272-02de650e7f62 to disappear
Sep 9 00:04:37.123: INFO: Pod downwardapi-volume-5d47bd1e-7000-4a88-b272-02de650e7f62 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:04:37.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9790" for this suite.
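The memory-request test above reads `requests.memory` through a downward API volume, where the file's contents are the resource quantity divided by the field's divisor. A sketch with plain int64 byte counts standing in for resource.Quantity:

```go
package main

import "fmt"

// resourceFieldValue mimics how a downward API resourceFieldRef renders a
// value: the quantity divided by the divisor, truncated to an integer.
// Using raw int64 bytes instead of resource.Quantity is a simplification
// for this sketch.
func resourceFieldValue(quantityBytes, divisorBytes int64) int64 {
	return quantityBytes / divisorBytes
}

func main() {
	memRequest := int64(64 * 1024 * 1024) // a hypothetical 64Mi memory request
	// With the default divisor "1" the mounted file holds raw bytes...
	fmt.Println(resourceFieldValue(memRequest, 1))
	// ...with divisor "1Mi" it holds whole mebibytes.
	fmt.Println(resourceFieldValue(memRequest, 1024*1024))
}
```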
Sep 9 00:04:43.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:04:43.239: INFO: namespace downward-api-9790 deletion completed in 6.112966458s
• [SLOW TEST:10.293 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:04:43.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 9 00:04:43.329: INFO: Waiting up to 5m0s for pod "pod-c6bc5d67-29fd-492a-8ac4-9721ca8966b4" in namespace "emptydir-5283" to be "success or failure"
Sep 9 00:04:43.339: INFO: Pod "pod-c6bc5d67-29fd-492a-8ac4-9721ca8966b4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.325568ms
Sep 9 00:04:45.342: INFO: Pod "pod-c6bc5d67-29fd-492a-8ac4-9721ca8966b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013444485s
Sep 9 00:04:47.345: INFO: Pod "pod-c6bc5d67-29fd-492a-8ac4-9721ca8966b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016870178s
STEP: Saw pod success
Sep 9 00:04:47.345: INFO: Pod "pod-c6bc5d67-29fd-492a-8ac4-9721ca8966b4" satisfied condition "success or failure"
Sep 9 00:04:47.348: INFO: Trying to get logs from node iruya-worker2 pod pod-c6bc5d67-29fd-492a-8ac4-9721ca8966b4 container test-container:
STEP: delete the pod
Sep 9 00:04:47.382: INFO: Waiting for pod pod-c6bc5d67-29fd-492a-8ac4-9721ca8966b4 to disappear
Sep 9 00:04:47.393: INFO: Pod pod-c6bc5d67-29fd-492a-8ac4-9721ca8966b4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:04:47.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5283" for this suite.
Sep 9 00:04:53.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:04:53.484: INFO: namespace emptydir-5283 deletion completed in 6.086576744s
• [SLOW TEST:10.244 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:04:53.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 9 00:04:53.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-497e45c3-520a-42fe-b7a9-7bf405c1dbfd" in namespace "projected-2754" to be "success or failure"
Sep 9 00:04:53.592: INFO: Pod "downwardapi-volume-497e45c3-520a-42fe-b7a9-7bf405c1dbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 29.284022ms
Sep 9 00:04:55.596: INFO: Pod "downwardapi-volume-497e45c3-520a-42fe-b7a9-7bf405c1dbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033087019s
Sep 9 00:04:57.600: INFO: Pod "downwardapi-volume-497e45c3-520a-42fe-b7a9-7bf405c1dbfd": Phase="Running", Reason="", readiness=true. Elapsed: 4.03748721s
Sep 9 00:04:59.604: INFO: Pod "downwardapi-volume-497e45c3-520a-42fe-b7a9-7bf405c1dbfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041094419s
STEP: Saw pod success
Sep 9 00:04:59.604: INFO: Pod "downwardapi-volume-497e45c3-520a-42fe-b7a9-7bf405c1dbfd" satisfied condition "success or failure"
Sep 9 00:04:59.607: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-497e45c3-520a-42fe-b7a9-7bf405c1dbfd container client-container:
STEP: delete the pod
Sep 9 00:04:59.672: INFO: Waiting for pod downwardapi-volume-497e45c3-520a-42fe-b7a9-7bf405c1dbfd to disappear
Sep 9 00:04:59.692: INFO: Pod downwardapi-volume-497e45c3-520a-42fe-b7a9-7bf405c1dbfd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:04:59.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2754" for this suite.
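The projected downward API test above relies on `limits.cpu` defaulting to the node's allocatable CPU when the container sets no limit. That fallback rule can be sketched in millicores (the 16-core node below is hypothetical, and int64 millicores stand in for resource.Quantity):

```go
package main

import "fmt"

// defaultedCPULimit mirrors the rule this test exercises: a resourceFieldRef
// for "limits.cpu" reports the container's limit when one is set, and falls
// back to the node's allocatable CPU otherwise.
func defaultedCPULimit(containerLimitMilli, nodeAllocatableMilli int64) int64 {
	if containerLimitMilli > 0 {
		return containerLimitMilli
	}
	return nodeAllocatableMilli
}

func main() {
	nodeAllocatable := int64(16000) // hypothetical 16-core node, in millicores

	// No limit set: the downward API file reports node allocatable.
	fmt.Println(defaultedCPULimit(0, nodeAllocatable))
	// Explicit 500m limit: the explicit value wins.
	fmt.Println(defaultedCPULimit(500, nodeAllocatable))
}
```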
Sep 9 00:05:05.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 00:05:05.795: INFO: namespace projected-2754 deletion completed in 6.099978485s • [SLOW TEST:12.311 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 9 00:05:05.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to 
test downward API volume plugin
Sep 9 00:05:05.939: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7416b194-83b5-4775-a288-20fc660871d9" in namespace "downward-api-445" to be "success or failure"
Sep 9 00:05:05.987: INFO: Pod "downwardapi-volume-7416b194-83b5-4775-a288-20fc660871d9": Phase="Pending", Reason="", readiness=false. Elapsed: 48.000975ms
Sep 9 00:05:08.054: INFO: Pod "downwardapi-volume-7416b194-83b5-4775-a288-20fc660871d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11475872s
Sep 9 00:05:10.132: INFO: Pod "downwardapi-volume-7416b194-83b5-4775-a288-20fc660871d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.192658142s
STEP: Saw pod success
Sep 9 00:05:10.132: INFO: Pod "downwardapi-volume-7416b194-83b5-4775-a288-20fc660871d9" satisfied condition "success or failure"
Sep 9 00:05:10.134: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7416b194-83b5-4775-a288-20fc660871d9 container client-container:
STEP: delete the pod
Sep 9 00:05:10.208: INFO: Waiting for pod downwardapi-volume-7416b194-83b5-4775-a288-20fc660871d9 to disappear
Sep 9 00:05:10.219: INFO: Pod downwardapi-volume-7416b194-83b5-4775-a288-20fc660871d9 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:05:10.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-445" for this suite.
Sep 9 00:05:16.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:05:16.401: INFO: namespace downward-api-445 deletion completed in 6.177823224s
• [SLOW TEST:10.605 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:05:16.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Sep 9 00:05:16.464: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1934" to be "success or failure"
Sep 9 00:05:16.484: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.067432ms
Sep 9 00:05:18.551: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087118394s
Sep 9 00:05:20.569: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104810881s
Sep 9 00:05:22.573: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108904837s
STEP: Saw pod success
Sep 9 00:05:22.573: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Sep 9 00:05:22.576: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Sep 9 00:05:22.622: INFO: Waiting for pod pod-host-path-test to disappear
Sep 9 00:05:22.627: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:05:22.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1934" for this suite.
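The hostPath check above boils down to mounting a hostPath volume into a pod and reading back the permission bits of the mount point. A minimal hand-rolled equivalent might look like this; it requires a running cluster, and the pod name, image, and host path are illustrative assumptions, not the e2e fixture's actual spec:

```shell
# Sketch only: mount a hostPath volume and print its octal mode.
# Names, image, and /tmp path are assumptions for illustration.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp
      type: DirectoryOrCreate
EOF
# Once the pod reaches Succeeded, its log holds the octal mode:
kubectl logs hostpath-mode-demo
```

The e2e test asserts the observed mode matches what the kubelet is expected to apply; this sketch only surfaces the value for inspection.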
Sep 9 00:05:28.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:05:28.749: INFO: namespace hostpath-1934 deletion completed in 6.11809928s
• [SLOW TEST:12.348 seconds]
[sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:05:28.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0909 00:05:59.378171 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 9 00:05:59.378: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:05:59.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5825" for this suite.
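The garbage collector behavior exercised above (deleteOptions.PropagationPolicy=Orphan) can be tried by hand: deleting a Deployment with orphan propagation leaves its ReplicaSet in place instead of cascading the delete. A sketch, requiring a running cluster; the deployment name is illustrative:

```shell
# Create a deployment; the controller creates a ReplicaSet for it.
kubectl create deployment web --image=nginx
kubectl get rs -l app=web

# Delete the deployment while orphaning its dependents.
# --cascade=orphan sets PropagationPolicy=Orphan
# (on kubectl older than v1.20, the equivalent flag was --cascade=false).
kubectl delete deployment web --cascade=orphan

# The ReplicaSet survives, now without an owner reference.
kubectl get rs -l app=web
```

This mirrors the test's assertion that the garbage collector does not delete the ReplicaSet during the 30-second observation window.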
Sep 9 00:06:05.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:06:05.456: INFO: namespace gc-5825 deletion completed in 6.074969074s
• [SLOW TEST:36.707 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:06:05.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep 9 00:06:05.933: INFO: Waiting up to 5m0s for pod "downward-api-cee242f0-a2e0-48b6-a78a-fdc000733173" in namespace "downward-api-4840" to be "success or failure"
Sep 9 00:06:05.970: INFO: Pod "downward-api-cee242f0-a2e0-48b6-a78a-fdc000733173": Phase="Pending", Reason="", readiness=false. Elapsed: 36.339487ms
Sep 9 00:06:07.974: INFO: Pod "downward-api-cee242f0-a2e0-48b6-a78a-fdc000733173": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040111405s
Sep 9 00:06:09.978: INFO: Pod "downward-api-cee242f0-a2e0-48b6-a78a-fdc000733173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044632108s
STEP: Saw pod success
Sep 9 00:06:09.978: INFO: Pod "downward-api-cee242f0-a2e0-48b6-a78a-fdc000733173" satisfied condition "success or failure"
Sep 9 00:06:09.981: INFO: Trying to get logs from node iruya-worker2 pod downward-api-cee242f0-a2e0-48b6-a78a-fdc000733173 container dapi-container:
STEP: delete the pod
Sep 9 00:06:10.055: INFO: Waiting for pod downward-api-cee242f0-a2e0-48b6-a78a-fdc000733173 to disappear
Sep 9 00:06:10.102: INFO: Pod downward-api-cee242f0-a2e0-48b6-a78a-fdc000733173 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:06:10.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4840" for this suite.
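The Downward API test above exposes the node's IP to the container through an environment variable populated from a fieldRef. A minimal sketch, requiring a running cluster; the pod name and image are illustrative (the container name `dapi-container` is taken from the log):

```shell
# Sketch: inject the node IP via the Downward API env fieldRef status.hostIP.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: host-ip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
# After the pod completes, its log contains a line like HOST_IP=<node address>.
kubectl logs host-ip-demo
```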
Sep 9 00:06:16.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:06:16.275: INFO: namespace downward-api-4840 deletion completed in 6.169546046s
• [SLOW TEST:10.819 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:06:16.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Sep 9 00:06:21.426: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:06:22.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7345" for this suite.
Sep 9 00:06:44.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:06:44.637: INFO: namespace replicaset-7345 deletion completed in 22.188472399s
• [SLOW TEST:28.361 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:06:44.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 9 00:06:44.752: INFO: Waiting up to 5m0s for pod "pod-42894bd3-e690-4ab6-983d-e1eeb1853a4c" in namespace "emptydir-7670" to be "success or failure"
Sep 9 00:06:44.762: INFO: Pod "pod-42894bd3-e690-4ab6-983d-e1eeb1853a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.741858ms
Sep 9 00:06:46.882: INFO: Pod "pod-42894bd3-e690-4ab6-983d-e1eeb1853a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129733915s
Sep 9 00:06:48.886: INFO: Pod "pod-42894bd3-e690-4ab6-983d-e1eeb1853a4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133370399s
STEP: Saw pod success
Sep 9 00:06:48.886: INFO: Pod "pod-42894bd3-e690-4ab6-983d-e1eeb1853a4c" satisfied condition "success or failure"
Sep 9 00:06:48.889: INFO: Trying to get logs from node iruya-worker pod pod-42894bd3-e690-4ab6-983d-e1eeb1853a4c container test-container:
STEP: delete the pod
Sep 9 00:06:48.921: INFO: Waiting for pod pod-42894bd3-e690-4ab6-983d-e1eeb1853a4c to disappear
Sep 9 00:06:48.935: INFO: Pod pod-42894bd3-e690-4ab6-983d-e1eeb1853a4c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:06:48.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7670" for this suite.
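The emptyDir variant above ("non-root,0666,tmpfs") writes a file with mode 0666 into a memory-backed emptyDir volume as a non-root user and verifies the mode and filesystem. A hand-rolled sketch, requiring a running cluster; pod name, image, UID, and mount path are illustrative:

```shell
# Sketch: tmpfs-backed emptyDir exercised as a non-root user.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # non-root, as in the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /ed/f && chmod 0666 /ed/f && stat -c '%a' /ed/f && grep ' /ed ' /proc/mounts"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir:
      medium: Memory         # backs the volume with tmpfs
EOF
# Log output shows the file mode (666) and a tmpfs entry for /ed.
kubectl logs emptydir-tmpfs-demo
```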
Sep 9 00:06:54.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:06:55.029: INFO: namespace emptydir-7670 deletion completed in 6.090392107s
• [SLOW TEST:10.391 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:06:55.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Sep 9 00:06:55.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9734'
Sep 9 00:06:58.069: INFO: stderr: ""
Sep 9 00:06:58.069: INFO: stdout: "pod/pause created\n"
Sep 9 00:06:58.069: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Sep 9 00:06:58.069: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9734" to be "running and ready"
Sep 9 00:06:58.109: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 40.569559ms
Sep 9 00:07:00.113: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044088254s
Sep 9 00:07:02.117: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.04815337s
Sep 9 00:07:02.117: INFO: Pod "pause" satisfied condition "running and ready"
Sep 9 00:07:02.117: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Sep 9 00:07:02.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9734'
Sep 9 00:07:02.216: INFO: stderr: ""
Sep 9 00:07:02.216: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Sep 9 00:07:02.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9734'
Sep 9 00:07:02.310: INFO: stderr: ""
Sep 9 00:07:02.310: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Sep 9 00:07:02.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9734'
Sep 9 00:07:02.412: INFO: stderr: ""
Sep 9 00:07:02.412: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Sep 9 00:07:02.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9734'
Sep 9 00:07:02.507: INFO: stderr: ""
Sep 9 00:07:02.507: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] [k8s.io] Kubectl label
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Sep 9 00:07:02.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9734'
Sep 9 00:07:02.632: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 9 00:07:02.632: INFO: stdout: "pod \"pause\" force deleted\n"
Sep 9 00:07:02.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9734'
Sep 9 00:07:03.023: INFO: stderr: "No resources found.\n"
Sep 9 00:07:03.023: INFO: stdout: ""
Sep 9 00:07:03.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9734 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 9 00:07:03.132: INFO: stderr: ""
Sep 9 00:07:03.132: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:07:03.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9734" for this suite.
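The label add/verify/remove sequence above is plain kubectl. Stripped of the test's `--kubeconfig` and `--namespace` flags, and assuming a pod named `pause` already exists in the current namespace, the same commands are:

```shell
# Add a label to the pod.
kubectl label pods pause testing-label=testing-label-value

# Show it: -L adds a TESTING-LABEL column to the output.
kubectl get pod pause -L testing-label

# Remove it: a trailing '-' after the key deletes the label.
kubectl label pods pause testing-label-

# The TESTING-LABEL column is now empty.
kubectl get pod pause -L testing-label
```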
Sep 9 00:07:09.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:07:09.235: INFO: namespace kubectl-9734 deletion completed in 6.098542022s
• [SLOW TEST:14.206 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl label
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update the label on a resource [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:07:09.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 9 00:07:09.289: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6ef5ee0-d4df-4e77-b7a4-90a546d099af" in namespace "projected-3732" to be "success or failure"
Sep 9 00:07:09.310: INFO: Pod "downwardapi-volume-f6ef5ee0-d4df-4e77-b7a4-90a546d099af": Phase="Pending", Reason="", readiness=false. Elapsed: 20.800483ms
Sep 9 00:07:11.338: INFO: Pod "downwardapi-volume-f6ef5ee0-d4df-4e77-b7a4-90a546d099af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049060549s
Sep 9 00:07:13.341: INFO: Pod "downwardapi-volume-f6ef5ee0-d4df-4e77-b7a4-90a546d099af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052679945s
STEP: Saw pod success
Sep 9 00:07:13.342: INFO: Pod "downwardapi-volume-f6ef5ee0-d4df-4e77-b7a4-90a546d099af" satisfied condition "success or failure"
Sep 9 00:07:13.344: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f6ef5ee0-d4df-4e77-b7a4-90a546d099af container client-container:
STEP: delete the pod
Sep 9 00:07:13.360: INFO: Waiting for pod downwardapi-volume-f6ef5ee0-d4df-4e77-b7a4-90a546d099af to disappear
Sep 9 00:07:13.381: INFO: Pod downwardapi-volume-f6ef5ee0-d4df-4e77-b7a4-90a546d099af no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:07:13.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3732" for this suite.
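The projected downwardAPI test above surfaces the container's memory limit as a file in the pod, via a `resourceFieldRef` item inside a projected volume source. A sketch, requiring a running cluster; pod name, image, limit value, and file path are illustrative (the container name `client-container` matches the log):

```shell
# Sketch: expose limits.memory as a file through a projected downwardAPI source.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
# The pod log holds the limit in bytes (64Mi -> 67108864 with the default divisor).
kubectl logs projected-memlimit-demo
```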
Sep 9 00:07:19.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:07:19.471: INFO: namespace projected-3732 deletion completed in 6.086358738s
• [SLOW TEST:10.234 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:07:19.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 9 00:07:23.564: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:07:23.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-770" for this suite.
Sep 9 00:07:29.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:07:29.666: INFO: namespace container-runtime-770 deletion completed in 6.087693138s
• [SLOW TEST:10.195 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:07:29.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-87b9fd62-613a-4895-8ad8-39e579890d27
STEP: Creating configMap with name cm-test-opt-upd-b9ef4083-47de-4c9a-a086-ea1ddaa70082
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-87b9fd62-613a-4895-8ad8-39e579890d27
STEP: Updating configmap cm-test-opt-upd-b9ef4083-47de-4c9a-a086-ea1ddaa70082
STEP: Creating configMap with name cm-test-opt-create-8cee683b-a14a-4a78-b8d2-754bc28d760e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:07:37.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1343" for this suite.
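The "optional updates" test above depends on configMap volume sources marked `optional: true`: the pod starts even if the referenced configMap does not exist yet, and later create/update/delete events are eventually reflected in the mounted files by the kubelet. A sketch, requiring a running cluster; all names are illustrative:

```shell
# Sketch: an optional projected configMap source; the configMap is created later.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: maybe-created-later
          optional: true        # pod starts even though the configMap is absent
EOF
kubectl create configmap maybe-created-later --from-literal=k=v
# After the kubelet's next sync, the key materializes in the volume:
kubectl exec optional-cm-demo -- cat /etc/cm/k
```

Propagation is eventually consistent, which is why the e2e test spends time "waiting to observe update in volume" rather than asserting immediately.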
Sep 9 00:08:01.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:08:01.967: INFO: namespace projected-1343 deletion completed in 24.111378936s
• [SLOW TEST:32.300 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:08:01.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 9 00:08:02.057: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7b6c700-aa0e-4850-85aa-0724f66d8de1" in namespace "downward-api-6524" to be "success or failure"
Sep 9 00:08:02.060: INFO: Pod "downwardapi-volume-a7b6c700-aa0e-4850-85aa-0724f66d8de1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.298186ms
Sep 9 00:08:04.065: INFO: Pod "downwardapi-volume-a7b6c700-aa0e-4850-85aa-0724f66d8de1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007685782s
Sep 9 00:08:06.068: INFO: Pod "downwardapi-volume-a7b6c700-aa0e-4850-85aa-0724f66d8de1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01131785s
STEP: Saw pod success
Sep 9 00:08:06.068: INFO: Pod "downwardapi-volume-a7b6c700-aa0e-4850-85aa-0724f66d8de1" satisfied condition "success or failure"
Sep 9 00:08:06.071: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a7b6c700-aa0e-4850-85aa-0724f66d8de1 container client-container:
STEP: delete the pod
Sep 9 00:08:06.094: INFO: Waiting for pod downwardapi-volume-a7b6c700-aa0e-4850-85aa-0724f66d8de1 to disappear
Sep 9 00:08:06.115: INFO: Pod downwardapi-volume-a7b6c700-aa0e-4850-85aa-0724f66d8de1 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:08:06.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6524" for this suite.
Sep 9 00:08:12.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:08:12.212: INFO: namespace downward-api-6524 deletion completed in 6.093360961s
• [SLOW TEST:10.245 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:08:12.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5139, will wait for the garbage collector to delete the pods
Sep 9 00:08:16.337: INFO: Deleting Job.batch foo took: 6.503547ms
Sep 9 00:08:16.637: INFO: Terminating Job.batch foo pods took: 300.268353ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:08:53.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5139" for this suite.
Sep 9 00:08:59.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:08:59.898: INFO: namespace job-5139 deletion completed in 6.152964628s
• [SLOW TEST:47.685 seconds]
[sig-apps] Job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:08:59.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Sep 9 00:09:04.509: INFO: Successfully updated pod "labelsupdatef943e30e-18c5-4f11-ac17-303f248559b3"
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:09:08.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3445" for this suite.
Sep 9 00:09:30.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:09:30.646: INFO: namespace projected-3445 deletion completed in 22.094181861s
• [SLOW TEST:30.748 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:09:30.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be
provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Sep 9 00:09:30.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-9171' Sep 9 00:09:30.794: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Sep 9 00:09:30.794: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Sep 9 00:09:34.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9171' Sep 9 00:09:35.017: INFO: stderr: "" Sep 9 00:09:35.017: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 9 00:09:35.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9171" for this suite. 
Sep 9 00:09:57.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 00:09:57.139: INFO: namespace kubectl-9171 deletion completed in 22.119016037s • [SLOW TEST:26.493 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 9 00:09:57.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-31363486-3346-4a6f-b71c-029c5998339c STEP: Creating a pod to test consume configMaps Sep 9 00:09:57.228: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-267b0560-9205-432b-92bc-5e66e4b15e73" in namespace "configmap-917" to be "success or failure" Sep 9 00:09:57.246: INFO: Pod "pod-configmaps-267b0560-9205-432b-92bc-5e66e4b15e73": Phase="Pending", Reason="", readiness=false. Elapsed: 18.273351ms Sep 9 00:09:59.251: INFO: Pod "pod-configmaps-267b0560-9205-432b-92bc-5e66e4b15e73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022795842s Sep 9 00:10:01.254: INFO: Pod "pod-configmaps-267b0560-9205-432b-92bc-5e66e4b15e73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026609287s STEP: Saw pod success Sep 9 00:10:01.254: INFO: Pod "pod-configmaps-267b0560-9205-432b-92bc-5e66e4b15e73" satisfied condition "success or failure" Sep 9 00:10:01.257: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-267b0560-9205-432b-92bc-5e66e4b15e73 container configmap-volume-test: STEP: delete the pod Sep 9 00:10:01.299: INFO: Waiting for pod pod-configmaps-267b0560-9205-432b-92bc-5e66e4b15e73 to disappear Sep 9 00:10:01.359: INFO: Pod pod-configmaps-267b0560-9205-432b-92bc-5e66e4b15e73 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 9 00:10:01.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-917" for this suite. 
Sep 9 00:10:07.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 00:10:07.633: INFO: namespace configmap-917 deletion completed in 6.270215254s • [SLOW TEST:10.493 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 9 00:10:07.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-4b5c4cd8-187e-4c32-b544-4029f1207afc STEP: Creating the pod STEP: Updating configmap configmap-test-upd-4b5c4cd8-187e-4c32-b544-4029f1207afc STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 9 00:11:38.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9637" for this suite. Sep 9 00:11:54.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 00:11:54.267: INFO: namespace configmap-9637 deletion completed in 16.133487433s • [SLOW TEST:106.633 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 9 00:11:54.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
Performing setup for networking test in namespace pod-network-test-2822 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 9 00:11:54.328: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Sep 9 00:12:20.463: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.92 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2822 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 00:12:20.463: INFO: >>> kubeConfig: /root/.kube/config I0909 00:12:20.489057 6 log.go:172] (0xc001043970) (0xc00236a0a0) Create stream I0909 00:12:20.489082 6 log.go:172] (0xc001043970) (0xc00236a0a0) Stream added, broadcasting: 1 I0909 00:12:20.491479 6 log.go:172] (0xc001043970) Reply frame received for 1 I0909 00:12:20.491519 6 log.go:172] (0xc001043970) (0xc0025760a0) Create stream I0909 00:12:20.491531 6 log.go:172] (0xc001043970) (0xc0025760a0) Stream added, broadcasting: 3 I0909 00:12:20.492640 6 log.go:172] (0xc001043970) Reply frame received for 3 I0909 00:12:20.492697 6 log.go:172] (0xc001043970) (0xc000a74780) Create stream I0909 00:12:20.492713 6 log.go:172] (0xc001043970) (0xc000a74780) Stream added, broadcasting: 5 I0909 00:12:20.493742 6 log.go:172] (0xc001043970) Reply frame received for 5 I0909 00:12:21.570323 6 log.go:172] (0xc001043970) Data frame received for 3 I0909 00:12:21.570374 6 log.go:172] (0xc0025760a0) (3) Data frame handling I0909 00:12:21.570408 6 log.go:172] (0xc0025760a0) (3) Data frame sent I0909 00:12:21.570443 6 log.go:172] (0xc001043970) Data frame received for 3 I0909 00:12:21.570459 6 log.go:172] (0xc0025760a0) (3) Data frame handling I0909 00:12:21.571011 6 log.go:172] (0xc001043970) Data frame received for 5 I0909 00:12:21.571041 6 log.go:172] (0xc000a74780) (5) Data frame handling I0909 00:12:21.574176 6 log.go:172] (0xc001043970) Data frame received for 1 I0909 00:12:21.574222 6 
log.go:172] (0xc00236a0a0) (1) Data frame handling I0909 00:12:21.574259 6 log.go:172] (0xc00236a0a0) (1) Data frame sent I0909 00:12:21.574302 6 log.go:172] (0xc001043970) (0xc00236a0a0) Stream removed, broadcasting: 1 I0909 00:12:21.574360 6 log.go:172] (0xc001043970) Go away received I0909 00:12:21.574480 6 log.go:172] (0xc001043970) (0xc00236a0a0) Stream removed, broadcasting: 1 I0909 00:12:21.574513 6 log.go:172] (0xc001043970) (0xc0025760a0) Stream removed, broadcasting: 3 I0909 00:12:21.574526 6 log.go:172] (0xc001043970) (0xc000a74780) Stream removed, broadcasting: 5 Sep 9 00:12:21.574: INFO: Found all expected endpoints: [netserver-0] Sep 9 00:12:21.578: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.41 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2822 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 00:12:21.578: INFO: >>> kubeConfig: /root/.kube/config I0909 00:12:21.613348 6 log.go:172] (0xc000eca8f0) (0xc0025761e0) Create stream I0909 00:12:21.613378 6 log.go:172] (0xc000eca8f0) (0xc0025761e0) Stream added, broadcasting: 1 I0909 00:12:21.616098 6 log.go:172] (0xc000eca8f0) Reply frame received for 1 I0909 00:12:21.616144 6 log.go:172] (0xc000eca8f0) (0xc001d2cf00) Create stream I0909 00:12:21.616159 6 log.go:172] (0xc000eca8f0) (0xc001d2cf00) Stream added, broadcasting: 3 I0909 00:12:21.617131 6 log.go:172] (0xc000eca8f0) Reply frame received for 3 I0909 00:12:21.617155 6 log.go:172] (0xc000eca8f0) (0xc000211a40) Create stream I0909 00:12:21.617165 6 log.go:172] (0xc000eca8f0) (0xc000211a40) Stream added, broadcasting: 5 I0909 00:12:21.618032 6 log.go:172] (0xc000eca8f0) Reply frame received for 5 I0909 00:12:22.692850 6 log.go:172] (0xc000eca8f0) Data frame received for 3 I0909 00:12:22.692897 6 log.go:172] (0xc001d2cf00) (3) Data frame handling I0909 00:12:22.692942 6 log.go:172] (0xc001d2cf00) (3) Data frame sent I0909 
00:12:22.693329 6 log.go:172] (0xc000eca8f0) Data frame received for 3 I0909 00:12:22.693368 6 log.go:172] (0xc001d2cf00) (3) Data frame handling I0909 00:12:22.693406 6 log.go:172] (0xc000eca8f0) Data frame received for 5 I0909 00:12:22.693427 6 log.go:172] (0xc000211a40) (5) Data frame handling I0909 00:12:22.694950 6 log.go:172] (0xc000eca8f0) Data frame received for 1 I0909 00:12:22.694973 6 log.go:172] (0xc0025761e0) (1) Data frame handling I0909 00:12:22.695000 6 log.go:172] (0xc0025761e0) (1) Data frame sent I0909 00:12:22.695020 6 log.go:172] (0xc000eca8f0) (0xc0025761e0) Stream removed, broadcasting: 1 I0909 00:12:22.695095 6 log.go:172] (0xc000eca8f0) Go away received I0909 00:12:22.695142 6 log.go:172] (0xc000eca8f0) (0xc0025761e0) Stream removed, broadcasting: 1 I0909 00:12:22.695167 6 log.go:172] (0xc000eca8f0) (0xc001d2cf00) Stream removed, broadcasting: 3 I0909 00:12:22.695189 6 log.go:172] (0xc000eca8f0) (0xc000211a40) Stream removed, broadcasting: 5 Sep 9 00:12:22.695: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 9 00:12:22.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2822" for this suite. 
Sep 9 00:12:46.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 00:12:46.799: INFO: namespace pod-network-test-2822 deletion completed in 24.099482216s • [SLOW TEST:52.532 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 9 00:12:46.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-4710/secret-test-f7b2374f-070f-4c1a-a9f5-2c606aa89081 STEP: Creating a pod to test consume secrets Sep 9 00:12:46.878: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-9992d79b-344c-4955-bb84-d0721f3b5267" in namespace "secrets-4710" to be "success or failure" Sep 9 00:12:46.889: INFO: Pod "pod-configmaps-9992d79b-344c-4955-bb84-d0721f3b5267": Phase="Pending", Reason="", readiness=false. Elapsed: 10.141393ms Sep 9 00:12:48.929: INFO: Pod "pod-configmaps-9992d79b-344c-4955-bb84-d0721f3b5267": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050776296s Sep 9 00:12:50.953: INFO: Pod "pod-configmaps-9992d79b-344c-4955-bb84-d0721f3b5267": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074721915s STEP: Saw pod success Sep 9 00:12:50.953: INFO: Pod "pod-configmaps-9992d79b-344c-4955-bb84-d0721f3b5267" satisfied condition "success or failure" Sep 9 00:12:50.956: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-9992d79b-344c-4955-bb84-d0721f3b5267 container env-test: STEP: delete the pod Sep 9 00:12:51.002: INFO: Waiting for pod pod-configmaps-9992d79b-344c-4955-bb84-d0721f3b5267 to disappear Sep 9 00:12:51.014: INFO: Pod pod-configmaps-9992d79b-344c-4955-bb84-d0721f3b5267 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 9 00:12:51.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4710" for this suite. 
Sep 9 00:12:57.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 00:12:57.112: INFO: namespace secrets-4710 deletion completed in 6.09454661s • [SLOW TEST:10.312 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 9 00:12:57.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-d1070f26-ac00-453c-8a47-a782b08a183a STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-d1070f26-ac00-453c-8a47-a782b08a183a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 9 00:13:03.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6161" for this suite. Sep 9 00:13:25.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 00:13:25.368: INFO: namespace projected-6161 deletion completed in 22.090515772s • [SLOW TEST:28.255 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 9 00:13:25.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 9 00:13:29.999: INFO: Successfully updated pod "pod-update-cdeb9d8e-3c20-4a6c-a6db-87b59ccdbebb" STEP: verifying the updated pod is in kubernetes Sep 9 00:13:30.007: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 9 00:13:30.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6471" for this suite. Sep 9 00:13:52.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 00:13:52.139: INFO: namespace pods-6471 deletion completed in 22.130129589s • [SLOW TEST:26.772 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 9 00:13:52.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Sep 9 00:13:58.985: INFO: 0 pods remaining Sep 9 00:13:58.985: INFO: 0 pods has nil DeletionTimestamp Sep 9 00:13:58.985: INFO: STEP: Gathering metrics W0909 00:13:59.704596 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Sep 9 00:13:59.704: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 9 00:13:59.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2547" for this suite. 
Sep 9 00:14:05.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 00:14:05.841: INFO: namespace gc-2547 deletion completed in 6.133621086s • [SLOW TEST:13.700 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 9 00:14:05.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-159.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-159.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-159.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-159.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-159.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-159.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-159.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 25.30.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.30.25_udp@PTR;check="$$(dig +tcp +noall +answer +search 25.30.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.30.25_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-159.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-159.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-159.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-159.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-159.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-159.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-159.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 25.30.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.30.25_udp@PTR;check="$$(dig +tcp +noall +answer +search 25.30.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.30.25_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 9 00:14:12.109: INFO: Unable to read wheezy_udp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:12.112: INFO: Unable to read wheezy_tcp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:12.115: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:12.117: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:12.137: INFO: Unable to read jessie_udp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:12.139: INFO: Unable to read jessie_tcp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:12.142: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod 
dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:12.145: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:12.159: INFO: Lookups using dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012 failed for: [wheezy_udp@dns-test-service.dns-159.svc.cluster.local wheezy_tcp@dns-test-service.dns-159.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local jessie_udp@dns-test-service.dns-159.svc.cluster.local jessie_tcp@dns-test-service.dns-159.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local] Sep 9 00:14:17.164: INFO: Unable to read wheezy_udp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:17.168: INFO: Unable to read wheezy_tcp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:17.172: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:17.175: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: 
the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:17.195: INFO: Unable to read jessie_udp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:17.198: INFO: Unable to read jessie_tcp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:17.201: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:17.204: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:17.222: INFO: Lookups using dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012 failed for: [wheezy_udp@dns-test-service.dns-159.svc.cluster.local wheezy_tcp@dns-test-service.dns-159.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local jessie_udp@dns-test-service.dns-159.svc.cluster.local jessie_tcp@dns-test-service.dns-159.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local] Sep 9 00:14:22.165: INFO: Unable to read wheezy_udp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods 
dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:22.168: INFO: Unable to read wheezy_tcp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:22.172: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:22.176: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:22.197: INFO: Unable to read jessie_udp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:22.199: INFO: Unable to read jessie_tcp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:22.202: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:22.205: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:22.224: INFO: Lookups using 
dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012 failed for: [wheezy_udp@dns-test-service.dns-159.svc.cluster.local wheezy_tcp@dns-test-service.dns-159.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local jessie_udp@dns-test-service.dns-159.svc.cluster.local jessie_tcp@dns-test-service.dns-159.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local] Sep 9 00:14:27.164: INFO: Unable to read wheezy_udp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:27.168: INFO: Unable to read wheezy_tcp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:27.171: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:27.175: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:27.199: INFO: Unable to read jessie_udp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:27.201: INFO: Unable to read jessie_tcp@dns-test-service.dns-159.svc.cluster.local from pod 
dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:27.204: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:27.207: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:27.225: INFO: Lookups using dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012 failed for: [wheezy_udp@dns-test-service.dns-159.svc.cluster.local wheezy_tcp@dns-test-service.dns-159.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local jessie_udp@dns-test-service.dns-159.svc.cluster.local jessie_tcp@dns-test-service.dns-159.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local] Sep 9 00:14:32.164: INFO: Unable to read wheezy_udp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:32.167: INFO: Unable to read wheezy_tcp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:32.171: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: 
the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:32.174: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:32.197: INFO: Unable to read jessie_udp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:32.201: INFO: Unable to read jessie_tcp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:32.204: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:32.207: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:32.224: INFO: Lookups using dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012 failed for: [wheezy_udp@dns-test-service.dns-159.svc.cluster.local wheezy_tcp@dns-test-service.dns-159.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local jessie_udp@dns-test-service.dns-159.svc.cluster.local jessie_tcp@dns-test-service.dns-159.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local] Sep 9 00:14:37.164: INFO: Unable to read wheezy_udp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:37.167: INFO: Unable to read wheezy_tcp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:37.170: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:37.173: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:37.195: INFO: Unable to read jessie_udp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:37.199: INFO: Unable to read jessie_tcp@dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:37.202: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:37.205: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local from pod dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012: the server could not find the requested resource (get pods dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012) Sep 9 00:14:37.224: INFO: Lookups using dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012 failed for: [wheezy_udp@dns-test-service.dns-159.svc.cluster.local wheezy_tcp@dns-test-service.dns-159.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local jessie_udp@dns-test-service.dns-159.svc.cluster.local jessie_tcp@dns-test-service.dns-159.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-159.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-159.svc.cluster.local] Sep 9 00:14:42.222: INFO: DNS probes using dns-159/dns-test-59b20db5-0271-4a0d-a6eb-5ac0e16be012 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 9 00:14:42.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-159" for this suite. 
Sep 9 00:14:49.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:14:49.091: INFO: namespace dns-159 deletion completed in 6.102467435s
• [SLOW TEST:43.250 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:14:49.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-521057b5-7339-40ad-a0c3-1a54ae47be8d
STEP: Creating a pod to test consume secrets
Sep 9 00:14:49.193: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-889aaed6-68eb-4c84-9c30-abb65f158d17" in namespace "projected-6578" to be "success or failure"
Sep 9 00:14:49.209: INFO: Pod "pod-projected-secrets-889aaed6-68eb-4c84-9c30-abb65f158d17": Phase="Pending", Reason="", readiness=false. Elapsed: 16.167261ms
Sep 9 00:14:51.214: INFO: Pod "pod-projected-secrets-889aaed6-68eb-4c84-9c30-abb65f158d17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020437023s
Sep 9 00:14:53.217: INFO: Pod "pod-projected-secrets-889aaed6-68eb-4c84-9c30-abb65f158d17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023830964s
STEP: Saw pod success
Sep 9 00:14:53.217: INFO: Pod "pod-projected-secrets-889aaed6-68eb-4c84-9c30-abb65f158d17" satisfied condition "success or failure"
Sep 9 00:14:53.219: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-889aaed6-68eb-4c84-9c30-abb65f158d17 container projected-secret-volume-test:
STEP: delete the pod
Sep 9 00:14:53.271: INFO: Waiting for pod pod-projected-secrets-889aaed6-68eb-4c84-9c30-abb65f158d17 to disappear
Sep 9 00:14:53.341: INFO: Pod pod-projected-secrets-889aaed6-68eb-4c84-9c30-abb65f158d17 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:14:53.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6578" for this suite.
Sep 9 00:14:59.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 00:14:59.533: INFO: namespace projected-6578 deletion completed in 6.187887089s • [SLOW TEST:10.441 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 9 00:14:59.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Sep 9 00:14:59.595: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=kubectl-1942 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Sep 9 00:15:02.979: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0909 00:15:02.887196 514 log.go:172] (0xc0009be210) (0xc000354140) Create stream\nI0909 00:15:02.887270 514 log.go:172] (0xc0009be210) (0xc000354140) Stream added, broadcasting: 1\nI0909 00:15:02.889952 514 log.go:172] (0xc0009be210) Reply frame received for 1\nI0909 00:15:02.889999 514 log.go:172] (0xc0009be210) (0xc0003541e0) Create stream\nI0909 00:15:02.890011 514 log.go:172] (0xc0009be210) (0xc0003541e0) Stream added, broadcasting: 3\nI0909 00:15:02.890916 514 log.go:172] (0xc0009be210) Reply frame received for 3\nI0909 00:15:02.890955 514 log.go:172] (0xc0009be210) (0xc0007c45a0) Create stream\nI0909 00:15:02.890966 514 log.go:172] (0xc0009be210) (0xc0007c45a0) Stream added, broadcasting: 5\nI0909 00:15:02.891970 514 log.go:172] (0xc0009be210) Reply frame received for 5\nI0909 00:15:02.892072 514 log.go:172] (0xc0009be210) (0xc0003b2000) Create stream\nI0909 00:15:02.892089 514 log.go:172] (0xc0009be210) (0xc0003b2000) Stream added, broadcasting: 7\nI0909 00:15:02.892860 514 log.go:172] (0xc0009be210) Reply frame received for 7\nI0909 00:15:02.892984 514 log.go:172] (0xc0003541e0) (3) Writing data frame\nI0909 00:15:02.893072 514 log.go:172] (0xc0003541e0) (3) Writing data frame\nI0909 00:15:02.893803 514 log.go:172] (0xc0009be210) Data frame received for 5\nI0909 00:15:02.893821 514 log.go:172] (0xc0007c45a0) (5) Data frame handling\nI0909 00:15:02.893839 514 log.go:172] (0xc0007c45a0) (5) Data frame sent\nI0909 00:15:02.894375 514 log.go:172] (0xc0009be210) Data frame received for 5\nI0909 
00:15:02.894391 514 log.go:172] (0xc0007c45a0) (5) Data frame handling\nI0909 00:15:02.894401 514 log.go:172] (0xc0007c45a0) (5) Data frame sent\nI0909 00:15:02.925346 514 log.go:172] (0xc0009be210) Data frame received for 7\nI0909 00:15:02.925370 514 log.go:172] (0xc0003b2000) (7) Data frame handling\nI0909 00:15:02.925458 514 log.go:172] (0xc0009be210) Data frame received for 5\nI0909 00:15:02.925473 514 log.go:172] (0xc0007c45a0) (5) Data frame handling\nI0909 00:15:02.925904 514 log.go:172] (0xc0009be210) Data frame received for 1\nI0909 00:15:02.925937 514 log.go:172] (0xc000354140) (1) Data frame handling\nI0909 00:15:02.925961 514 log.go:172] (0xc000354140) (1) Data frame sent\nI0909 00:15:02.925981 514 log.go:172] (0xc0009be210) (0xc000354140) Stream removed, broadcasting: 1\nI0909 00:15:02.926068 514 log.go:172] (0xc0009be210) (0xc0003541e0) Stream removed, broadcasting: 3\nI0909 00:15:02.926168 514 log.go:172] (0xc0009be210) (0xc000354140) Stream removed, broadcasting: 1\nI0909 00:15:02.926206 514 log.go:172] (0xc0009be210) (0xc0003541e0) Stream removed, broadcasting: 3\nI0909 00:15:02.926231 514 log.go:172] (0xc0009be210) (0xc0007c45a0) Stream removed, broadcasting: 5\nI0909 00:15:02.926267 514 log.go:172] (0xc0009be210) (0xc0003b2000) Stream removed, broadcasting: 7\nI0909 00:15:02.926354 514 log.go:172] (0xc0009be210) Go away received\n" Sep 9 00:15:02.979: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 9 00:15:04.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1942" for this suite. 
Sep 9 00:15:15.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:15:15.090: INFO: namespace kubectl-1942 deletion completed in 10.099680243s
• [SLOW TEST:15.557 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:15:15.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Sep 9 00:15:19.696: INFO: Successfully updated pod "annotationupdatea21ed42b-2357-49da-96f5-ad55d57cf1d3"
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:15:21.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1195" for this suite.
Sep 9 00:15:43.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 00:15:43.871: INFO: namespace projected-1195 deletion completed in 22.115905302s
• [SLOW TEST:28.781 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 9 00:15:43.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-97937f06-1c0a-40a8-886c-2d285e751ae3 in namespace container-probe-913
Sep 9 00:15:47.948: INFO: Started pod liveness-97937f06-1c0a-40a8-886c-2d285e751ae3 in namespace container-probe-913
STEP: checking the pod's current state and verifying that restartCount is present
Sep 9 00:15:47.951: INFO: Initial restart count of pod liveness-97937f06-1c0a-40a8-886c-2d285e751ae3 is 0
Sep 9 00:16:10.159: INFO: Restart count of pod container-probe-913/liveness-97937f06-1c0a-40a8-886c-2d285e751ae3 is now 1 (22.208147206s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 9 00:16:10.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-913" for this suite.
Sep 9 00:16:16.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 00:16:16.294: INFO: namespace container-probe-913 deletion completed in 6.11762571s • [SLOW TEST:32.423 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 9 00:16:16.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Sep 9 00:16:16.369: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-c786d0a4-0d6c-4688-baa7-30b6685ac40f
STEP: Creating a pod to test consume configMaps
Sep  9 00:16:22.668: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a261dbe-4bed-4ef6-9be7-347dfb96af9e" in namespace "configmap-6797" to be "success or failure"
Sep  9 00:16:22.672: INFO: Pod "pod-configmaps-7a261dbe-4bed-4ef6-9be7-347dfb96af9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087548ms
Sep  9 00:16:24.676: INFO: Pod "pod-configmaps-7a261dbe-4bed-4ef6-9be7-347dfb96af9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008146846s
Sep  9 00:16:26.680: INFO: Pod "pod-configmaps-7a261dbe-4bed-4ef6-9be7-347dfb96af9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012148668s
STEP: Saw pod success
Sep  9 00:16:26.680: INFO: Pod "pod-configmaps-7a261dbe-4bed-4ef6-9be7-347dfb96af9e" satisfied condition "success or failure"
Sep  9 00:16:26.683: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7a261dbe-4bed-4ef6-9be7-347dfb96af9e container configmap-volume-test: 
STEP: delete the pod
Sep  9 00:16:26.708: INFO: Waiting for pod pod-configmaps-7a261dbe-4bed-4ef6-9be7-347dfb96af9e to disappear
Sep  9 00:16:26.710: INFO: Pod pod-configmaps-7a261dbe-4bed-4ef6-9be7-347dfb96af9e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:16:26.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6797" for this suite.
Sep  9 00:16:32.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:16:32.823: INFO: namespace configmap-6797 deletion completed in 6.110201607s

• [SLOW TEST:10.258 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:16:32.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-f5f71a92-7db4-4df9-a2e3-092b8a2a0521
STEP: Creating a pod to test consume configMaps
Sep  9 00:16:32.923: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-665d1927-ae3d-4ec1-bed1-a2fed3028c5d" in namespace "projected-501" to be "success or failure"
Sep  9 00:16:32.967: INFO: Pod "pod-projected-configmaps-665d1927-ae3d-4ec1-bed1-a2fed3028c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 43.535356ms
Sep  9 00:16:34.970: INFO: Pod "pod-projected-configmaps-665d1927-ae3d-4ec1-bed1-a2fed3028c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046863499s
Sep  9 00:16:36.974: INFO: Pod "pod-projected-configmaps-665d1927-ae3d-4ec1-bed1-a2fed3028c5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050185157s
STEP: Saw pod success
Sep  9 00:16:36.974: INFO: Pod "pod-projected-configmaps-665d1927-ae3d-4ec1-bed1-a2fed3028c5d" satisfied condition "success or failure"
Sep  9 00:16:36.977: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-665d1927-ae3d-4ec1-bed1-a2fed3028c5d container projected-configmap-volume-test: 
STEP: delete the pod
Sep  9 00:16:37.017: INFO: Waiting for pod pod-projected-configmaps-665d1927-ae3d-4ec1-bed1-a2fed3028c5d to disappear
Sep  9 00:16:37.019: INFO: Pod pod-projected-configmaps-665d1927-ae3d-4ec1-bed1-a2fed3028c5d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:16:37.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-501" for this suite.
Sep  9 00:16:43.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:16:43.109: INFO: namespace projected-501 deletion completed in 6.086149104s

• [SLOW TEST:10.286 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:16:43.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Sep  9 00:16:43.166: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8398,SelfLink:/api/v1/namespaces/watch-8398/configmaps/e2e-watch-test-configmap-a,UID:e663e1bb-dc80-42cd-9a20-22a0c2a75d3b,ResourceVersion:318963,Generation:0,CreationTimestamp:2020-09-09 00:16:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep  9 00:16:43.166: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8398,SelfLink:/api/v1/namespaces/watch-8398/configmaps/e2e-watch-test-configmap-a,UID:e663e1bb-dc80-42cd-9a20-22a0c2a75d3b,ResourceVersion:318963,Generation:0,CreationTimestamp:2020-09-09 00:16:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Sep  9 00:16:53.174: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8398,SelfLink:/api/v1/namespaces/watch-8398/configmaps/e2e-watch-test-configmap-a,UID:e663e1bb-dc80-42cd-9a20-22a0c2a75d3b,ResourceVersion:318984,Generation:0,CreationTimestamp:2020-09-09 00:16:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Sep  9 00:16:53.174: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8398,SelfLink:/api/v1/namespaces/watch-8398/configmaps/e2e-watch-test-configmap-a,UID:e663e1bb-dc80-42cd-9a20-22a0c2a75d3b,ResourceVersion:318984,Generation:0,CreationTimestamp:2020-09-09 00:16:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Sep  9 00:17:03.183: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8398,SelfLink:/api/v1/namespaces/watch-8398/configmaps/e2e-watch-test-configmap-a,UID:e663e1bb-dc80-42cd-9a20-22a0c2a75d3b,ResourceVersion:319004,Generation:0,CreationTimestamp:2020-09-09 00:16:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep  9 00:17:03.183: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8398,SelfLink:/api/v1/namespaces/watch-8398/configmaps/e2e-watch-test-configmap-a,UID:e663e1bb-dc80-42cd-9a20-22a0c2a75d3b,ResourceVersion:319004,Generation:0,CreationTimestamp:2020-09-09 00:16:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Sep  9 00:17:13.190: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8398,SelfLink:/api/v1/namespaces/watch-8398/configmaps/e2e-watch-test-configmap-a,UID:e663e1bb-dc80-42cd-9a20-22a0c2a75d3b,ResourceVersion:319025,Generation:0,CreationTimestamp:2020-09-09 00:16:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep  9 00:17:13.190: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8398,SelfLink:/api/v1/namespaces/watch-8398/configmaps/e2e-watch-test-configmap-a,UID:e663e1bb-dc80-42cd-9a20-22a0c2a75d3b,ResourceVersion:319025,Generation:0,CreationTimestamp:2020-09-09 00:16:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Sep  9 00:17:23.199: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8398,SelfLink:/api/v1/namespaces/watch-8398/configmaps/e2e-watch-test-configmap-b,UID:db7c95d3-7fe3-4885-aaf5-c116e452c2e3,ResourceVersion:319045,Generation:0,CreationTimestamp:2020-09-09 00:17:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep  9 00:17:23.199: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8398,SelfLink:/api/v1/namespaces/watch-8398/configmaps/e2e-watch-test-configmap-b,UID:db7c95d3-7fe3-4885-aaf5-c116e452c2e3,ResourceVersion:319045,Generation:0,CreationTimestamp:2020-09-09 00:17:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Sep  9 00:17:33.206: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8398,SelfLink:/api/v1/namespaces/watch-8398/configmaps/e2e-watch-test-configmap-b,UID:db7c95d3-7fe3-4885-aaf5-c116e452c2e3,ResourceVersion:319065,Generation:0,CreationTimestamp:2020-09-09 00:17:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep  9 00:17:33.206: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8398,SelfLink:/api/v1/namespaces/watch-8398/configmaps/e2e-watch-test-configmap-b,UID:db7c95d3-7fe3-4885-aaf5-c116e452c2e3,ResourceVersion:319065,Generation:0,CreationTimestamp:2020-09-09 00:17:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:17:43.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8398" for this suite.
Sep  9 00:17:49.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:17:49.337: INFO: namespace watch-8398 deletion completed in 6.125470912s

• [SLOW TEST:66.228 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:17:49.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Sep  9 00:17:49.390: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep  9 00:17:49.422: INFO: Waiting for terminating namespaces to be deleted...
Sep  9 00:17:49.430: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Sep  9 00:17:49.434: INFO: kube-proxy-7tdlb from kube-system started at 2020-09-07 19:17:06 +0000 UTC (1 container statuses recorded)
Sep  9 00:17:49.434: INFO: 	Container kube-proxy ready: true, restart count 0
Sep  9 00:17:49.434: INFO: kindnet-l8ltc from kube-system started at 2020-09-07 19:17:06 +0000 UTC (1 container statuses recorded)
Sep  9 00:17:49.434: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep  9 00:17:49.434: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Sep  9 00:17:49.439: INFO: kube-proxy-hwdzp from kube-system started at 2020-09-07 19:16:55 +0000 UTC (1 container statuses recorded)
Sep  9 00:17:49.439: INFO: 	Container kube-proxy ready: true, restart count 0
Sep  9 00:17:49.439: INFO: kindnet-mnblj from kube-system started at 2020-09-07 19:16:56 +0000 UTC (1 container statuses recorded)
Sep  9 00:17:49.439: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep  9 00:17:49.439: INFO: coredns-5d4dd4b4db-25mzm from kube-system started at 2020-09-07 19:17:27 +0000 UTC (1 container statuses recorded)
Sep  9 00:17:49.439: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d22d12d1-380c-4a12-92a0-537083634797 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-d22d12d1-380c-4a12-92a0-537083634797 off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d22d12d1-380c-4a12-92a0-537083634797
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:17:57.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1889" for this suite.
Sep  9 00:18:15.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:18:15.653: INFO: namespace sched-pred-1889 deletion completed in 18.074862925s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:26.316 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:18:15.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:18:21.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9692" for this suite.
Sep  9 00:18:28.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:18:28.097: INFO: namespace namespaces-9692 deletion completed in 6.118832767s
STEP: Destroying namespace "nsdeletetest-6926" for this suite.
Sep  9 00:18:28.099: INFO: Namespace nsdeletetest-6926 was already deleted
STEP: Destroying namespace "nsdeletetest-6676" for this suite.
Sep  9 00:18:34.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:18:34.205: INFO: namespace nsdeletetest-6676 deletion completed in 6.105801096s

• [SLOW TEST:18.551 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:18:34.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:18:39.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9027" for this suite.
Sep  9 00:18:45.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:18:45.943: INFO: namespace watch-9027 deletion completed in 6.182058705s

• [SLOW TEST:11.738 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:18:45.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Sep  9 00:18:46.026: INFO: Waiting up to 5m0s for pod "var-expansion-0b3cb7ae-d3a9-4a6d-a54a-e7316f44e2f0" in namespace "var-expansion-2015" to be "success or failure"
Sep  9 00:18:46.040: INFO: Pod "var-expansion-0b3cb7ae-d3a9-4a6d-a54a-e7316f44e2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.748422ms
Sep  9 00:18:48.045: INFO: Pod "var-expansion-0b3cb7ae-d3a9-4a6d-a54a-e7316f44e2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018199963s
Sep  9 00:18:50.048: INFO: Pod "var-expansion-0b3cb7ae-d3a9-4a6d-a54a-e7316f44e2f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022050422s
STEP: Saw pod success
Sep  9 00:18:50.049: INFO: Pod "var-expansion-0b3cb7ae-d3a9-4a6d-a54a-e7316f44e2f0" satisfied condition "success or failure"
Sep  9 00:18:50.051: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-0b3cb7ae-d3a9-4a6d-a54a-e7316f44e2f0 container dapi-container: 
STEP: delete the pod
Sep  9 00:18:50.072: INFO: Waiting for pod var-expansion-0b3cb7ae-d3a9-4a6d-a54a-e7316f44e2f0 to disappear
Sep  9 00:18:50.077: INFO: Pod var-expansion-0b3cb7ae-d3a9-4a6d-a54a-e7316f44e2f0 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:18:50.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2015" for this suite.
Sep  9 00:18:56.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:18:56.168: INFO: namespace var-expansion-2015 deletion completed in 6.088002488s

• [SLOW TEST:10.224 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
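The substitution this test exercises can be sketched as a minimal pod manifest. The kubelet expands `$(VAR)` references in `command`/`args` from the container's `env` list; names and values below are illustrative, not the ones the framework generates:

```yaml
# Hypothetical reproduction of the var-expansion check: $(MY_VAR) in the
# command is expanded from the env list before the container starts.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container        # container name matches the test's
    image: busybox
    command: ["/bin/sh", "-c", "echo $(MY_VAR)"]
    env:
    - name: MY_VAR
      value: "from-env"
```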
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:18:56.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:18:56.283: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Sep  9 00:18:56.293: INFO: Number of nodes with available pods: 0
Sep  9 00:18:56.293: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Sep  9 00:18:56.415: INFO: Number of nodes with available pods: 0
Sep  9 00:18:56.415: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:18:57.419: INFO: Number of nodes with available pods: 0
Sep  9 00:18:57.419: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:18:58.570: INFO: Number of nodes with available pods: 0
Sep  9 00:18:58.570: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:18:59.420: INFO: Number of nodes with available pods: 0
Sep  9 00:18:59.420: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:19:00.420: INFO: Number of nodes with available pods: 1
Sep  9 00:19:00.420: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Sep  9 00:19:00.450: INFO: Number of nodes with available pods: 1
Sep  9 00:19:00.450: INFO: Number of running nodes: 0, number of available pods: 1
Sep  9 00:19:01.455: INFO: Number of nodes with available pods: 0
Sep  9 00:19:01.455: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Sep  9 00:19:01.466: INFO: Number of nodes with available pods: 0
Sep  9 00:19:01.466: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:19:02.470: INFO: Number of nodes with available pods: 0
Sep  9 00:19:02.470: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:19:03.469: INFO: Number of nodes with available pods: 0
Sep  9 00:19:03.469: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:19:04.470: INFO: Number of nodes with available pods: 0
Sep  9 00:19:04.470: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:19:05.470: INFO: Number of nodes with available pods: 0
Sep  9 00:19:05.471: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:19:06.492: INFO: Number of nodes with available pods: 0
Sep  9 00:19:06.492: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:19:07.470: INFO: Number of nodes with available pods: 1
Sep  9 00:19:07.470: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3058, will wait for the garbage collector to delete the pods
Sep  9 00:19:07.549: INFO: Deleting DaemonSet.extensions daemon-set took: 6.731773ms
Sep  9 00:19:07.849: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.281176ms
Sep  9 00:19:13.652: INFO: Number of nodes with available pods: 0
Sep  9 00:19:13.652: INFO: Number of running nodes: 0, number of available pods: 0
Sep  9 00:19:13.657: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3058/daemonsets","resourceVersion":"319535"},"items":null}

Sep  9 00:19:13.671: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3058/pods","resourceVersion":"319535"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:19:13.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3058" for this suite.
Sep  9 00:19:19.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:19:19.806: INFO: namespace daemonsets-3058 deletion completed in 6.099886323s

• [SLOW TEST:23.637 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
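The "complex daemon" scenario above can be sketched as a DaemonSet with a node selector: the pod only schedules onto nodes carrying the label, so relabeling a node (e.g. `kubectl label node iruya-worker color=green --overwrite`) moves the daemon pod, and the test additionally flips the update strategy to RollingUpdate. Label keys/values here are assumptions for illustration:

```yaml
# Sketch of the node-selector DaemonSet the test drives; the actual
# label key used by the framework is not shown in the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate         # strategy the test switches to mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue             # test relabels nodes blue -> green
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
```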
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:19:19.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Sep  9 00:19:23.894: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-b7a3a008-7193-42e6-815d-bc5d1721622b,GenerateName:,Namespace:events-99,SelfLink:/api/v1/namespaces/events-99/pods/send-events-b7a3a008-7193-42e6-815d-bc5d1721622b,UID:e9fdc7da-7183-4299-bc9e-b228deb87197,ResourceVersion:319583,Generation:0,CreationTimestamp:2020-09-09 00:19:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 856132852,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-m2sxr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m2sxr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-m2sxr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026bce10} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc0026bce30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:19:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:19:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:19:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:19:19 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.107,StartTime:2020-09-09 00:19:19 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-09-09 00:19:22 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://38ebe74544f8735b516c1a2de0d9c0c00d59bf0c33a1dc91b4060e0b4d09d8c5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Sep  9 00:19:25.899: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Sep  9 00:19:27.903: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:19:27.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-99" for this suite.
Sep  9 00:20:05.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:20:06.002: INFO: namespace events-99 deletion completed in 38.088410984s

• [SLOW TEST:46.196 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:20:06.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep  9 00:20:06.070: INFO: Waiting up to 5m0s for pod "pod-5b8fe58c-c765-4b50-bbe5-cb68765dc4dc" in namespace "emptydir-4940" to be "success or failure"
Sep  9 00:20:06.078: INFO: Pod "pod-5b8fe58c-c765-4b50-bbe5-cb68765dc4dc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014148ms
Sep  9 00:20:08.082: INFO: Pod "pod-5b8fe58c-c765-4b50-bbe5-cb68765dc4dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012284956s
Sep  9 00:20:10.087: INFO: Pod "pod-5b8fe58c-c765-4b50-bbe5-cb68765dc4dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016573054s
STEP: Saw pod success
Sep  9 00:20:10.087: INFO: Pod "pod-5b8fe58c-c765-4b50-bbe5-cb68765dc4dc" satisfied condition "success or failure"
Sep  9 00:20:10.090: INFO: Trying to get logs from node iruya-worker2 pod pod-5b8fe58c-c765-4b50-bbe5-cb68765dc4dc container test-container: 
STEP: delete the pod
Sep  9 00:20:10.109: INFO: Waiting for pod pod-5b8fe58c-c765-4b50-bbe5-cb68765dc4dc to disappear
Sep  9 00:20:10.113: INFO: Pod pod-5b8fe58c-c765-4b50-bbe5-cb68765dc4dc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:20:10.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4940" for this suite.
Sep  9 00:20:16.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:20:16.215: INFO: namespace emptydir-4940 deletion completed in 6.098599491s

• [SLOW TEST:10.212 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
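A rough equivalent of the `(non-root,0644,default)` emptyDir case: the real test image writes a file with the requested mode and verifies it; this manifest only shows the volume wiring and the non-root security context, with illustrative names:

```yaml
# Minimal sketch: emptyDir on the "default" medium (node disk, not
# Memory), mounted into a container running as a non-root UID.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo      # illustrative name
spec:
  securityContext:
    runAsUser: 1001             # the "non-root" part of the test name
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "touch /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # default medium
```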
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:20:16.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-0b7f7e86-c90c-4533-83d9-1ec09caef83d
STEP: Creating secret with name s-test-opt-upd-edc275a3-ff0d-4bd7-9df3-951a4274ecb1
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0b7f7e86-c90c-4533-83d9-1ec09caef83d
STEP: Updating secret s-test-opt-upd-edc275a3-ff0d-4bd7-9df3-951a4274ecb1
STEP: Creating secret with name s-test-opt-create-d96b6cbd-1eb5-4a88-9dc3-1a5ab3e80884
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:20:24.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6237" for this suite.
Sep  9 00:20:46.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:20:46.586: INFO: namespace secrets-6237 deletion completed in 22.164345351s

• [SLOW TEST:30.371 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
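The optional-secret behavior this test checks can be sketched as follows: with `optional: true` on a secret volume source, the pod starts even if the secret does not exist yet, and the kubelet projects (or removes) the keys in the mounted volume as the secret is created, updated, or deleted. Names below are illustrative:

```yaml
# Sketch of an optional secret volume; the pod tolerates the secret
# being absent at creation time and observes later updates in-place.
apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-demo    # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    secret:
      secretName: my-optional-secret   # may not exist yet
      optional: true
```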
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:20:46.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1257/configmap-test-38be29e4-96d6-4805-8c13-4e405b91d3d4
STEP: Creating a pod to test consume configMaps
Sep  9 00:20:46.649: INFO: Waiting up to 5m0s for pod "pod-configmaps-fea9957d-19ea-44fe-ab80-e38abca15788" in namespace "configmap-1257" to be "success or failure"
Sep  9 00:20:46.653: INFO: Pod "pod-configmaps-fea9957d-19ea-44fe-ab80-e38abca15788": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328614ms
Sep  9 00:20:48.709: INFO: Pod "pod-configmaps-fea9957d-19ea-44fe-ab80-e38abca15788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060108492s
Sep  9 00:20:50.713: INFO: Pod "pod-configmaps-fea9957d-19ea-44fe-ab80-e38abca15788": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064303265s
STEP: Saw pod success
Sep  9 00:20:50.713: INFO: Pod "pod-configmaps-fea9957d-19ea-44fe-ab80-e38abca15788" satisfied condition "success or failure"
Sep  9 00:20:50.716: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-fea9957d-19ea-44fe-ab80-e38abca15788 container env-test: 
STEP: delete the pod
Sep  9 00:20:50.925: INFO: Waiting for pod pod-configmaps-fea9957d-19ea-44fe-ab80-e38abca15788 to disappear
Sep  9 00:20:50.934: INFO: Pod pod-configmaps-fea9957d-19ea-44fe-ab80-e38abca15788 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:20:50.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1257" for this suite.
Sep  9 00:20:57.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:20:57.102: INFO: namespace configmap-1257 deletion completed in 6.117768043s

• [SLOW TEST:10.516 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
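Consuming a ConfigMap "via the environment", as this test does, amounts to an `env` entry with a `configMapKeyRef`. A minimal sketch with illustrative names and keys:

```yaml
# Hypothetical pair: a ConfigMap plus a pod whose env var is sourced
# from one of its keys, mirroring the env-test container in the log.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test          # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["/bin/sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```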
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:20:57.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep  9 00:20:57.142: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e4b92c5-efa7-4b36-9907-ec7b6a16228f" in namespace "projected-8123" to be "success or failure"
Sep  9 00:20:57.189: INFO: Pod "downwardapi-volume-9e4b92c5-efa7-4b36-9907-ec7b6a16228f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.849161ms
Sep  9 00:20:59.193: INFO: Pod "downwardapi-volume-9e4b92c5-efa7-4b36-9907-ec7b6a16228f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05100246s
Sep  9 00:21:01.196: INFO: Pod "downwardapi-volume-9e4b92c5-efa7-4b36-9907-ec7b6a16228f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054541552s
STEP: Saw pod success
Sep  9 00:21:01.196: INFO: Pod "downwardapi-volume-9e4b92c5-efa7-4b36-9907-ec7b6a16228f" satisfied condition "success or failure"
Sep  9 00:21:01.199: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9e4b92c5-efa7-4b36-9907-ec7b6a16228f container client-container: 
STEP: delete the pod
Sep  9 00:21:01.233: INFO: Waiting for pod downwardapi-volume-9e4b92c5-efa7-4b36-9907-ec7b6a16228f to disappear
Sep  9 00:21:01.241: INFO: Pod downwardapi-volume-9e4b92c5-efa7-4b36-9907-ec7b6a16228f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:21:01.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8123" for this suite.
Sep  9 00:21:07.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:21:07.338: INFO: namespace projected-8123 deletion completed in 6.093176761s

• [SLOW TEST:10.236 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
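The "podname only" case above corresponds to a projected volume exposing just `metadata.name` through the downward API. A sketch with illustrative names (the real test verifies the file content against the generated pod name):

```yaml
# Projected downwardAPI volume carrying only the pod's name.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container      # container name matches the test's
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```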
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:21:07.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-565
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-565
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-565
Sep  9 00:21:07.511: INFO: Found 0 stateful pods, waiting for 1
Sep  9 00:21:17.516: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Sep  9 00:21:17.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-565 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep  9 00:21:20.433: INFO: stderr: "I0909 00:21:20.294714     539 log.go:172] (0xc0005ca4d0) (0xc000778780) Create stream\nI0909 00:21:20.294763     539 log.go:172] (0xc0005ca4d0) (0xc000778780) Stream added, broadcasting: 1\nI0909 00:21:20.297669     539 log.go:172] (0xc0005ca4d0) Reply frame received for 1\nI0909 00:21:20.297724     539 log.go:172] (0xc0005ca4d0) (0xc0007e80a0) Create stream\nI0909 00:21:20.297755     539 log.go:172] (0xc0005ca4d0) (0xc0007e80a0) Stream added, broadcasting: 3\nI0909 00:21:20.299083     539 log.go:172] (0xc0005ca4d0) Reply frame received for 3\nI0909 00:21:20.299108     539 log.go:172] (0xc0005ca4d0) (0xc0007e8140) Create stream\nI0909 00:21:20.299120     539 log.go:172] (0xc0005ca4d0) (0xc0007e8140) Stream added, broadcasting: 5\nI0909 00:21:20.300223     539 log.go:172] (0xc0005ca4d0) Reply frame received for 5\nI0909 00:21:20.374674     539 log.go:172] (0xc0005ca4d0) Data frame received for 5\nI0909 00:21:20.375053     539 log.go:172] (0xc0007e8140) (5) Data frame handling\nI0909 00:21:20.375082     539 log.go:172] (0xc0007e8140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0909 00:21:20.427256     539 log.go:172] (0xc0005ca4d0) Data frame received for 3\nI0909 00:21:20.427279     539 log.go:172] (0xc0007e80a0) (3) Data frame handling\nI0909 00:21:20.427292     539 log.go:172] (0xc0007e80a0) (3) Data frame sent\nI0909 00:21:20.427477     539 log.go:172] (0xc0005ca4d0) Data frame received for 5\nI0909 00:21:20.427489     539 log.go:172] (0xc0007e8140) (5) Data frame handling\nI0909 00:21:20.427730     539 log.go:172] (0xc0005ca4d0) Data frame received for 3\nI0909 00:21:20.427766     539 log.go:172] (0xc0007e80a0) (3) Data frame handling\nI0909 00:21:20.429851     539 log.go:172] (0xc0005ca4d0) Data frame received for 1\nI0909 00:21:20.429867     539 log.go:172] (0xc000778780) (1) Data frame handling\nI0909 00:21:20.429878     539 log.go:172] (0xc000778780) (1) Data frame sent\nI0909 00:21:20.429891     
539 log.go:172] (0xc0005ca4d0) (0xc000778780) Stream removed, broadcasting: 1\nI0909 00:21:20.430043     539 log.go:172] (0xc0005ca4d0) Go away received\nI0909 00:21:20.430140     539 log.go:172] (0xc0005ca4d0) (0xc000778780) Stream removed, broadcasting: 1\nI0909 00:21:20.430151     539 log.go:172] (0xc0005ca4d0) (0xc0007e80a0) Stream removed, broadcasting: 3\nI0909 00:21:20.430159     539 log.go:172] (0xc0005ca4d0) (0xc0007e8140) Stream removed, broadcasting: 5\n"
Sep  9 00:21:20.433: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep  9 00:21:20.433: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep  9 00:21:20.437: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Sep  9 00:21:30.441: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep  9 00:21:30.441: INFO: Waiting for statefulset status.replicas updated to 0
Sep  9 00:21:30.456: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep  9 00:21:30.456: INFO: ss-0  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  }]
Sep  9 00:21:30.456: INFO: 
Sep  9 00:21:30.456: INFO: StatefulSet ss has not reached scale 3, at 1
Sep  9 00:21:31.461: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995478154s
Sep  9 00:21:32.467: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990530126s
Sep  9 00:21:33.470: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985185787s
Sep  9 00:21:34.513: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.981652243s
Sep  9 00:21:35.518: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.938785797s
Sep  9 00:21:36.523: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.933758626s
Sep  9 00:21:37.528: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.928521606s
Sep  9 00:21:38.543: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.923302028s
Sep  9 00:21:39.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 909.178937ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-565
Sep  9 00:21:40.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-565 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  9 00:21:40.763: INFO: stderr: "I0909 00:21:40.670865     573 log.go:172] (0xc0008ba6e0) (0xc000304b40) Create stream\nI0909 00:21:40.670950     573 log.go:172] (0xc0008ba6e0) (0xc000304b40) Stream added, broadcasting: 1\nI0909 00:21:40.673222     573 log.go:172] (0xc0008ba6e0) Reply frame received for 1\nI0909 00:21:40.673252     573 log.go:172] (0xc0008ba6e0) (0xc000a1e000) Create stream\nI0909 00:21:40.673260     573 log.go:172] (0xc0008ba6e0) (0xc000a1e000) Stream added, broadcasting: 3\nI0909 00:21:40.674159     573 log.go:172] (0xc0008ba6e0) Reply frame received for 3\nI0909 00:21:40.674210     573 log.go:172] (0xc0008ba6e0) (0xc000304be0) Create stream\nI0909 00:21:40.674231     573 log.go:172] (0xc0008ba6e0) (0xc000304be0) Stream added, broadcasting: 5\nI0909 00:21:40.675283     573 log.go:172] (0xc0008ba6e0) Reply frame received for 5\nI0909 00:21:40.758289     573 log.go:172] (0xc0008ba6e0) Data frame received for 5\nI0909 00:21:40.758316     573 log.go:172] (0xc000304be0) (5) Data frame handling\nI0909 00:21:40.758324     573 log.go:172] (0xc000304be0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0909 00:21:40.758337     573 log.go:172] (0xc0008ba6e0) Data frame received for 3\nI0909 00:21:40.758341     573 log.go:172] (0xc000a1e000) (3) Data frame handling\nI0909 00:21:40.758346     573 log.go:172] (0xc000a1e000) (3) Data frame sent\nI0909 00:21:40.758511     573 log.go:172] (0xc0008ba6e0) Data frame received for 5\nI0909 00:21:40.758550     573 log.go:172] (0xc000304be0) (5) Data frame handling\nI0909 00:21:40.758570     573 log.go:172] (0xc0008ba6e0) Data frame received for 3\nI0909 00:21:40.758579     573 log.go:172] (0xc000a1e000) (3) Data frame handling\nI0909 00:21:40.759900     573 log.go:172] (0xc0008ba6e0) Data frame received for 1\nI0909 00:21:40.759923     573 log.go:172] (0xc000304b40) (1) Data frame handling\nI0909 00:21:40.759940     573 log.go:172] (0xc000304b40) (1) Data frame sent\nI0909 00:21:40.759958     
573 log.go:172] (0xc0008ba6e0) (0xc000304b40) Stream removed, broadcasting: 1\nI0909 00:21:40.759979     573 log.go:172] (0xc0008ba6e0) Go away received\nI0909 00:21:40.760324     573 log.go:172] (0xc0008ba6e0) (0xc000304b40) Stream removed, broadcasting: 1\nI0909 00:21:40.760340     573 log.go:172] (0xc0008ba6e0) (0xc000a1e000) Stream removed, broadcasting: 3\nI0909 00:21:40.760347     573 log.go:172] (0xc0008ba6e0) (0xc000304be0) Stream removed, broadcasting: 5\n"
Sep  9 00:21:40.763: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep  9 00:21:40.763: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep  9 00:21:40.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-565 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  9 00:21:40.965: INFO: stderr: "I0909 00:21:40.895800     596 log.go:172] (0xc0009d0420) (0xc000788640) Create stream\nI0909 00:21:40.895871     596 log.go:172] (0xc0009d0420) (0xc000788640) Stream added, broadcasting: 1\nI0909 00:21:40.898629     596 log.go:172] (0xc0009d0420) Reply frame received for 1\nI0909 00:21:40.898680     596 log.go:172] (0xc0009d0420) (0xc00097e000) Create stream\nI0909 00:21:40.898697     596 log.go:172] (0xc0009d0420) (0xc00097e000) Stream added, broadcasting: 3\nI0909 00:21:40.899644     596 log.go:172] (0xc0009d0420) Reply frame received for 3\nI0909 00:21:40.899679     596 log.go:172] (0xc0009d0420) (0xc000860000) Create stream\nI0909 00:21:40.899698     596 log.go:172] (0xc0009d0420) (0xc000860000) Stream added, broadcasting: 5\nI0909 00:21:40.900589     596 log.go:172] (0xc0009d0420) Reply frame received for 5\nI0909 00:21:40.960185     596 log.go:172] (0xc0009d0420) Data frame received for 3\nI0909 00:21:40.960244     596 log.go:172] (0xc00097e000) (3) Data frame handling\nI0909 00:21:40.960266     596 log.go:172] (0xc00097e000) (3) Data frame sent\nI0909 00:21:40.960282     596 log.go:172] (0xc0009d0420) Data frame received for 3\nI0909 00:21:40.960302     596 log.go:172] (0xc00097e000) (3) Data frame handling\nI0909 00:21:40.960333     596 log.go:172] (0xc0009d0420) Data frame received for 5\nI0909 00:21:40.960350     596 log.go:172] (0xc000860000) (5) Data frame handling\nI0909 00:21:40.960367     596 log.go:172] (0xc000860000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0909 00:21:40.960384     596 log.go:172] (0xc0009d0420) Data frame received for 5\nI0909 00:21:40.960399     596 log.go:172] (0xc000860000) (5) Data frame handling\nI0909 00:21:40.961895     596 log.go:172] (0xc0009d0420) Data frame received for 1\nI0909 00:21:40.961918     596 log.go:172] (0xc000788640) (1) Data frame handling\nI0909 00:21:40.961941     596 
log.go:172] (0xc000788640) (1) Data frame sent\nI0909 00:21:40.961950     596 log.go:172] (0xc0009d0420) (0xc000788640) Stream removed, broadcasting: 1\nI0909 00:21:40.962008     596 log.go:172] (0xc0009d0420) Go away received\nI0909 00:21:40.962233     596 log.go:172] (0xc0009d0420) (0xc000788640) Stream removed, broadcasting: 1\nI0909 00:21:40.962250     596 log.go:172] (0xc0009d0420) (0xc00097e000) Stream removed, broadcasting: 3\nI0909 00:21:40.962257     596 log.go:172] (0xc0009d0420) (0xc000860000) Stream removed, broadcasting: 5\n"
Sep  9 00:21:40.965: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep  9 00:21:40.965: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep  9 00:21:40.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-565 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  9 00:21:41.183: INFO: stderr: "I0909 00:21:41.102740     617 log.go:172] (0xc0003bc630) (0xc000322960) Create stream\nI0909 00:21:41.102800     617 log.go:172] (0xc0003bc630) (0xc000322960) Stream added, broadcasting: 1\nI0909 00:21:41.106187     617 log.go:172] (0xc0003bc630) Reply frame received for 1\nI0909 00:21:41.106231     617 log.go:172] (0xc0003bc630) (0xc000322000) Create stream\nI0909 00:21:41.106245     617 log.go:172] (0xc0003bc630) (0xc000322000) Stream added, broadcasting: 3\nI0909 00:21:41.107109     617 log.go:172] (0xc0003bc630) Reply frame received for 3\nI0909 00:21:41.107147     617 log.go:172] (0xc0003bc630) (0xc0006ce320) Create stream\nI0909 00:21:41.107163     617 log.go:172] (0xc0003bc630) (0xc0006ce320) Stream added, broadcasting: 5\nI0909 00:21:41.108090     617 log.go:172] (0xc0003bc630) Reply frame received for 5\nI0909 00:21:41.176337     617 log.go:172] (0xc0003bc630) Data frame received for 5\nI0909 00:21:41.176396     617 log.go:172] (0xc0006ce320) (5) Data frame handling\nI0909 00:21:41.176423     617 log.go:172] (0xc0006ce320) (5) Data frame sent\nI0909 00:21:41.176442     617 log.go:172] (0xc0003bc630) Data frame received for 5\nI0909 00:21:41.176456     617 log.go:172] (0xc0006ce320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0909 00:21:41.176490     617 log.go:172] (0xc0003bc630) Data frame received for 3\nI0909 00:21:41.176506     617 log.go:172] (0xc000322000) (3) Data frame handling\nI0909 00:21:41.176524     617 log.go:172] (0xc000322000) (3) Data frame sent\nI0909 00:21:41.176544     617 log.go:172] (0xc0003bc630) Data frame received for 3\nI0909 00:21:41.176570     617 log.go:172] (0xc000322000) (3) Data frame handling\nI0909 00:21:41.177968     617 log.go:172] (0xc0003bc630) Data frame received for 1\nI0909 00:21:41.178007     617 log.go:172] (0xc000322960) (1) Data frame handling\nI0909 00:21:41.178037     617 
log.go:172] (0xc000322960) (1) Data frame sent\nI0909 00:21:41.178073     617 log.go:172] (0xc0003bc630) (0xc000322960) Stream removed, broadcasting: 1\nI0909 00:21:41.178095     617 log.go:172] (0xc0003bc630) Go away received\nI0909 00:21:41.178621     617 log.go:172] (0xc0003bc630) (0xc000322960) Stream removed, broadcasting: 1\nI0909 00:21:41.178652     617 log.go:172] (0xc0003bc630) (0xc000322000) Stream removed, broadcasting: 3\nI0909 00:21:41.178669     617 log.go:172] (0xc0003bc630) (0xc0006ce320) Stream removed, broadcasting: 5\n"
Sep  9 00:21:41.183: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep  9 00:21:41.183: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
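The exec commands above all end in `|| true`, which is why the framework reports success on ss-1 and ss-2 even though their `mv` failed with "No such file or directory" (the file had already been moved). A minimal local sketch of that idiom, using a temporary directory as a stand-in for `/usr/share/nginx/html/`:

```shell
# Stand-in reproduction of the test's `mv -v src dst || true` pattern.
# The `|| true` suffix keeps the exit code 0 even when a previous run
# already moved the file, exactly as seen on ss-1 and ss-2 above.
tmp=$(mktemp -d)
mkdir -p "$tmp/html"
echo ok > "$tmp/index.html"

mv -v "$tmp/index.html" "$tmp/html/" || true   # first run: file moves
mv -v "$tmp/index.html" "$tmp/html/" || true   # second run: mv fails, rc stays 0
echo "rc=$?"                                    # prints rc=0 either way
```

This is why the log can print an identical "stdout of mv … on ss-N" line for every pod: the command as a whole never fails, and the test only cares that the probe file ends up in (or out of) the web root.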

Sep  9 00:21:41.187: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Sep  9 00:21:41.187: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Sep  9 00:21:41.187: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Sep  9 00:21:41.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-565 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep  9 00:21:41.387: INFO: stderr: "I0909 00:21:41.310051     638 log.go:172] (0xc0008ae370) (0xc0009926e0) Create stream\nI0909 00:21:41.310100     638 log.go:172] (0xc0008ae370) (0xc0009926e0) Stream added, broadcasting: 1\nI0909 00:21:41.312563     638 log.go:172] (0xc0008ae370) Reply frame received for 1\nI0909 00:21:41.312633     638 log.go:172] (0xc0008ae370) (0xc0003ac320) Create stream\nI0909 00:21:41.312694     638 log.go:172] (0xc0008ae370) (0xc0003ac320) Stream added, broadcasting: 3\nI0909 00:21:41.313561     638 log.go:172] (0xc0008ae370) Reply frame received for 3\nI0909 00:21:41.313611     638 log.go:172] (0xc0008ae370) (0xc0007ae000) Create stream\nI0909 00:21:41.313723     638 log.go:172] (0xc0008ae370) (0xc0007ae000) Stream added, broadcasting: 5\nI0909 00:21:41.314576     638 log.go:172] (0xc0008ae370) Reply frame received for 5\nI0909 00:21:41.381926     638 log.go:172] (0xc0008ae370) Data frame received for 3\nI0909 00:21:41.381975     638 log.go:172] (0xc0008ae370) Data frame received for 5\nI0909 00:21:41.382004     638 log.go:172] (0xc0007ae000) (5) Data frame handling\nI0909 00:21:41.382016     638 log.go:172] (0xc0007ae000) (5) Data frame sent\nI0909 00:21:41.382025     638 log.go:172] (0xc0008ae370) Data frame received for 5\nI0909 00:21:41.382032     638 log.go:172] (0xc0007ae000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0909 00:21:41.382054     638 log.go:172] (0xc0003ac320) (3) Data frame handling\nI0909 00:21:41.382067     638 log.go:172] (0xc0003ac320) (3) Data frame sent\nI0909 00:21:41.382079     638 log.go:172] (0xc0008ae370) Data frame received for 3\nI0909 00:21:41.382091     638 log.go:172] (0xc0003ac320) (3) Data frame handling\nI0909 00:21:41.383331     638 log.go:172] (0xc0008ae370) Data frame received for 1\nI0909 00:21:41.383351     638 log.go:172] (0xc0009926e0) (1) Data frame handling\nI0909 00:21:41.383361     638 log.go:172] (0xc0009926e0) (1) Data frame sent\nI0909 00:21:41.383374     
638 log.go:172] (0xc0008ae370) (0xc0009926e0) Stream removed, broadcasting: 1\nI0909 00:21:41.383393     638 log.go:172] (0xc0008ae370) Go away received\nI0909 00:21:41.383685     638 log.go:172] (0xc0008ae370) (0xc0009926e0) Stream removed, broadcasting: 1\nI0909 00:21:41.383700     638 log.go:172] (0xc0008ae370) (0xc0003ac320) Stream removed, broadcasting: 3\nI0909 00:21:41.383706     638 log.go:172] (0xc0008ae370) (0xc0007ae000) Stream removed, broadcasting: 5\n"
Sep  9 00:21:41.387: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep  9 00:21:41.387: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep  9 00:21:41.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-565 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep  9 00:21:41.610: INFO: stderr: "I0909 00:21:41.506042     658 log.go:172] (0xc0006e6a50) (0xc0003826e0) Create stream\nI0909 00:21:41.506100     658 log.go:172] (0xc0006e6a50) (0xc0003826e0) Stream added, broadcasting: 1\nI0909 00:21:41.509491     658 log.go:172] (0xc0006e6a50) Reply frame received for 1\nI0909 00:21:41.509658     658 log.go:172] (0xc0006e6a50) (0xc0007c6000) Create stream\nI0909 00:21:41.509728     658 log.go:172] (0xc0006e6a50) (0xc0007c6000) Stream added, broadcasting: 3\nI0909 00:21:41.511117     658 log.go:172] (0xc0006e6a50) Reply frame received for 3\nI0909 00:21:41.511154     658 log.go:172] (0xc0006e6a50) (0xc0007c60a0) Create stream\nI0909 00:21:41.511167     658 log.go:172] (0xc0006e6a50) (0xc0007c60a0) Stream added, broadcasting: 5\nI0909 00:21:41.512176     658 log.go:172] (0xc0006e6a50) Reply frame received for 5\nI0909 00:21:41.568866     658 log.go:172] (0xc0006e6a50) Data frame received for 5\nI0909 00:21:41.568896     658 log.go:172] (0xc0007c60a0) (5) Data frame handling\nI0909 00:21:41.568914     658 log.go:172] (0xc0007c60a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0909 00:21:41.604104     658 log.go:172] (0xc0006e6a50) Data frame received for 5\nI0909 00:21:41.604149     658 log.go:172] (0xc0007c60a0) (5) Data frame handling\nI0909 00:21:41.604188     658 log.go:172] (0xc0006e6a50) Data frame received for 3\nI0909 00:21:41.604216     658 log.go:172] (0xc0007c6000) (3) Data frame handling\nI0909 00:21:41.604238     658 log.go:172] (0xc0007c6000) (3) Data frame sent\nI0909 00:21:41.604262     658 log.go:172] (0xc0006e6a50) Data frame received for 3\nI0909 00:21:41.604287     658 log.go:172] (0xc0007c6000) (3) Data frame handling\nI0909 00:21:41.606797     658 log.go:172] (0xc0006e6a50) Data frame received for 1\nI0909 00:21:41.606820     658 log.go:172] (0xc0003826e0) (1) Data frame handling\nI0909 00:21:41.606834     658 log.go:172] (0xc0003826e0) (1) Data frame sent\nI0909 00:21:41.606844     
658 log.go:172] (0xc0006e6a50) (0xc0003826e0) Stream removed, broadcasting: 1\nI0909 00:21:41.606852     658 log.go:172] (0xc0006e6a50) Go away received\nI0909 00:21:41.607277     658 log.go:172] (0xc0006e6a50) (0xc0003826e0) Stream removed, broadcasting: 1\nI0909 00:21:41.607304     658 log.go:172] (0xc0006e6a50) (0xc0007c6000) Stream removed, broadcasting: 3\nI0909 00:21:41.607314     658 log.go:172] (0xc0006e6a50) (0xc0007c60a0) Stream removed, broadcasting: 5\n"
Sep  9 00:21:41.611: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep  9 00:21:41.611: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep  9 00:21:41.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-565 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep  9 00:21:41.869: INFO: stderr: "I0909 00:21:41.742050     680 log.go:172] (0xc000a32630) (0xc00062eaa0) Create stream\nI0909 00:21:41.742110     680 log.go:172] (0xc000a32630) (0xc00062eaa0) Stream added, broadcasting: 1\nI0909 00:21:41.747106     680 log.go:172] (0xc000a32630) Reply frame received for 1\nI0909 00:21:41.747169     680 log.go:172] (0xc000a32630) (0xc00062e1e0) Create stream\nI0909 00:21:41.747185     680 log.go:172] (0xc000a32630) (0xc00062e1e0) Stream added, broadcasting: 3\nI0909 00:21:41.748776     680 log.go:172] (0xc000a32630) Reply frame received for 3\nI0909 00:21:41.748802     680 log.go:172] (0xc000a32630) (0xc00062e280) Create stream\nI0909 00:21:41.748809     680 log.go:172] (0xc000a32630) (0xc00062e280) Stream added, broadcasting: 5\nI0909 00:21:41.749788     680 log.go:172] (0xc000a32630) Reply frame received for 5\nI0909 00:21:41.810569     680 log.go:172] (0xc000a32630) Data frame received for 5\nI0909 00:21:41.810594     680 log.go:172] (0xc00062e280) (5) Data frame handling\nI0909 00:21:41.810604     680 log.go:172] (0xc00062e280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0909 00:21:41.862977     680 log.go:172] (0xc000a32630) Data frame received for 3\nI0909 00:21:41.862996     680 log.go:172] (0xc00062e1e0) (3) Data frame handling\nI0909 00:21:41.863004     680 log.go:172] (0xc00062e1e0) (3) Data frame sent\nI0909 00:21:41.863010     680 log.go:172] (0xc000a32630) Data frame received for 3\nI0909 00:21:41.863015     680 log.go:172] (0xc00062e1e0) (3) Data frame handling\nI0909 00:21:41.863311     680 log.go:172] (0xc000a32630) Data frame received for 5\nI0909 00:21:41.863326     680 log.go:172] (0xc00062e280) (5) Data frame handling\nI0909 00:21:41.864983     680 log.go:172] (0xc000a32630) Data frame received for 1\nI0909 00:21:41.865004     680 log.go:172] (0xc00062eaa0) (1) Data frame handling\nI0909 00:21:41.865016     680 log.go:172] (0xc00062eaa0) (1) Data frame sent\nI0909 00:21:41.865031     
680 log.go:172] (0xc000a32630) (0xc00062eaa0) Stream removed, broadcasting: 1\nI0909 00:21:41.865044     680 log.go:172] (0xc000a32630) Go away received\nI0909 00:21:41.865396     680 log.go:172] (0xc000a32630) (0xc00062eaa0) Stream removed, broadcasting: 1\nI0909 00:21:41.865411     680 log.go:172] (0xc000a32630) (0xc00062e1e0) Stream removed, broadcasting: 3\nI0909 00:21:41.865418     680 log.go:172] (0xc000a32630) (0xc00062e280) Stream removed, broadcasting: 5\n"
Sep  9 00:21:41.869: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep  9 00:21:41.869: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep  9 00:21:41.869: INFO: Waiting for statefulset status.replicas to be updated to 0
Sep  9 00:21:41.878: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Sep  9 00:21:51.901: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep  9 00:21:51.901: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Sep  9 00:21:51.901: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Sep  9 00:21:51.918: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep  9 00:21:51.918: INFO: ss-0  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  }]
Sep  9 00:21:51.918: INFO: ss-1  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:51.918: INFO: ss-2  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:51.918: INFO: 
Sep  9 00:21:51.918: INFO: StatefulSet ss has not reached scale 0, at 3
Sep  9 00:21:52.922: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep  9 00:21:52.922: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  }]
Sep  9 00:21:52.922: INFO: ss-1  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:52.922: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:52.922: INFO: 
Sep  9 00:21:52.922: INFO: StatefulSet ss has not reached scale 0, at 3
Sep  9 00:21:53.969: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep  9 00:21:53.969: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  }]
Sep  9 00:21:53.969: INFO: ss-1  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:53.969: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:53.969: INFO: 
Sep  9 00:21:53.969: INFO: StatefulSet ss has not reached scale 0, at 3
Sep  9 00:21:54.974: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep  9 00:21:54.974: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  }]
Sep  9 00:21:54.974: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:54.974: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:54.974: INFO: 
Sep  9 00:21:54.974: INFO: StatefulSet ss has not reached scale 0, at 3
Sep  9 00:21:55.978: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep  9 00:21:55.978: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  }]
Sep  9 00:21:55.978: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:55.978: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:55.978: INFO: 
Sep  9 00:21:55.978: INFO: StatefulSet ss has not reached scale 0, at 3
Sep  9 00:21:56.982: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep  9 00:21:56.982: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  }]
Sep  9 00:21:56.982: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:56.982: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:56.982: INFO: 
Sep  9 00:21:56.982: INFO: StatefulSet ss has not reached scale 0, at 3
Sep  9 00:21:57.986: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep  9 00:21:57.987: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  }]
Sep  9 00:21:57.987: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:57.987: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:57.987: INFO: 
Sep  9 00:21:57.987: INFO: StatefulSet ss has not reached scale 0, at 3
Sep  9 00:21:58.992: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep  9 00:21:58.992: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  }]
Sep  9 00:21:58.992: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:58.992: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:21:58.992: INFO: 
Sep  9 00:21:58.992: INFO: StatefulSet ss has not reached scale 0, at 3
Sep  9 00:22:00.002: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep  9 00:22:00.002: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  }]
Sep  9 00:22:00.002: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:22:00.002: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:22:00.002: INFO: 
Sep  9 00:22:00.002: INFO: StatefulSet ss has not reached scale 0, at 3
Sep  9 00:22:01.007: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep  9 00:22:01.007: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:07 +0000 UTC  }]
Sep  9 00:22:01.007: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:22:01.007: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:21:30 +0000 UTC  }]
Sep  9 00:22:01.007: INFO: 
Sep  9 00:22:01.007: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-565
Sep  9 00:22:02.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-565 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  9 00:22:02.173: INFO: rc: 1
Sep  9 00:22:02.173: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-565 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc003ed87e0 exit status 1   true [0xc0015ad5e8 0xc0015ad6c0 0xc0015ad7a0] [0xc0015ad5e8 0xc0015ad6c0 0xc0015ad7a0] [0xc0015ad668 0xc0015ad780] [0xba70e0 0xba70e0] 0xc002dabbc0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Sep  9 00:22:12.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-565 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  9 00:22:12.282: INFO: rc: 1
Sep  9 00:22:12.282: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-565 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc003d2a330 exit status 1   true [0xc00028bea0 0xc00028bed8 0xc00028bf60] [0xc00028bea0 0xc00028bed8 0xc00028bf60] [0xc00028beb8 0xc00028bf28] [0xba70e0 0xba70e0] 0xc003fc9980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
(identical RunHostCmd retry output omitted: the same kubectl exec command failed with rc: 1 and stderr 'Error from server (NotFound): pods "ss-0" not found' on every 10s retry from Sep  9 00:22:22 through Sep  9 00:27:04)
Sep  9 00:27:04.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-565 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  9 00:27:05.068: INFO: rc: 1
Sep  9 00:27:05.068: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Sep  9 00:27:05.068: INFO: Scaling statefulset ss to 0
Sep  9 00:27:05.077: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep  9 00:27:05.079: INFO: Deleting all statefulset in ns statefulset-565
Sep  9 00:27:05.082: INFO: Scaling statefulset ss to 0
Sep  9 00:27:05.090: INFO: Waiting for statefulset status.replicas updated to 0
Sep  9 00:27:05.093: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:27:05.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-565" for this suite.
Sep  9 00:27:11.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:27:11.195: INFO: namespace statefulset-565 deletion completed in 6.084413903s

• [SLOW TEST:363.856 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:27:11.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1204/configmap-test-6c17d7d4-caf3-4a2c-a256-1c7267eb27c0
STEP: Creating a pod to test consume configMaps
Sep  9 00:27:11.265: INFO: Waiting up to 5m0s for pod "pod-configmaps-017815ea-4e94-41fa-94cf-972ef42cb906" in namespace "configmap-1204" to be "success or failure"
Sep  9 00:27:11.274: INFO: Pod "pod-configmaps-017815ea-4e94-41fa-94cf-972ef42cb906": Phase="Pending", Reason="", readiness=false. Elapsed: 9.879624ms
Sep  9 00:27:13.278: INFO: Pod "pod-configmaps-017815ea-4e94-41fa-94cf-972ef42cb906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013355658s
Sep  9 00:27:15.285: INFO: Pod "pod-configmaps-017815ea-4e94-41fa-94cf-972ef42cb906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020298014s
STEP: Saw pod success
Sep  9 00:27:15.285: INFO: Pod "pod-configmaps-017815ea-4e94-41fa-94cf-972ef42cb906" satisfied condition "success or failure"
Sep  9 00:27:15.288: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-017815ea-4e94-41fa-94cf-972ef42cb906 container env-test: 
STEP: delete the pod
Sep  9 00:27:15.379: INFO: Waiting for pod pod-configmaps-017815ea-4e94-41fa-94cf-972ef42cb906 to disappear
Sep  9 00:27:15.389: INFO: Pod pod-configmaps-017815ea-4e94-41fa-94cf-972ef42cb906 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:27:15.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1204" for this suite.
Sep  9 00:27:21.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:27:21.528: INFO: namespace configmap-1204 deletion completed in 6.136166711s

• [SLOW TEST:10.333 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:27:21.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep  9 00:27:21.607: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fc1987c-123f-4ab3-a2d2-d825cf31f469" in namespace "downward-api-8954" to be "success or failure"
Sep  9 00:27:21.609: INFO: Pod "downwardapi-volume-3fc1987c-123f-4ab3-a2d2-d825cf31f469": Phase="Pending", Reason="", readiness=false. Elapsed: 2.411703ms
Sep  9 00:27:23.614: INFO: Pod "downwardapi-volume-3fc1987c-123f-4ab3-a2d2-d825cf31f469": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006621078s
Sep  9 00:27:25.618: INFO: Pod "downwardapi-volume-3fc1987c-123f-4ab3-a2d2-d825cf31f469": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010703099s
STEP: Saw pod success
Sep  9 00:27:25.618: INFO: Pod "downwardapi-volume-3fc1987c-123f-4ab3-a2d2-d825cf31f469" satisfied condition "success or failure"
Sep  9 00:27:25.620: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3fc1987c-123f-4ab3-a2d2-d825cf31f469 container client-container: 
STEP: delete the pod
Sep  9 00:27:25.687: INFO: Waiting for pod downwardapi-volume-3fc1987c-123f-4ab3-a2d2-d825cf31f469 to disappear
Sep  9 00:27:25.692: INFO: Pod downwardapi-volume-3fc1987c-123f-4ab3-a2d2-d825cf31f469 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:27:25.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8954" for this suite.
Sep  9 00:27:31.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:27:31.821: INFO: namespace downward-api-8954 deletion completed in 6.126190004s

• [SLOW TEST:10.292 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:27:31.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Sep  9 00:27:31.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Sep  9 00:27:32.082: INFO: stderr: ""
Sep  9 00:27:32.082: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:27:32.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2777" for this suite.
Sep  9 00:27:38.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:27:38.187: INFO: namespace kubectl-2777 deletion completed in 6.099470724s

• [SLOW TEST:6.365 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:27:38.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:28:38.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3141" for this suite.
Sep  9 00:29:00.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:29:00.380: INFO: namespace container-probe-3141 deletion completed in 22.095003198s

• [SLOW TEST:82.193 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:29:00.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-24e337af-1108-407e-a197-e0df1eb75959
STEP: Creating a pod to test consume secrets
Sep  9 00:29:00.536: INFO: Waiting up to 5m0s for pod "pod-secrets-e2e8c654-6a1e-4969-830f-8096927f8e36" in namespace "secrets-7270" to be "success or failure"
Sep  9 00:29:00.555: INFO: Pod "pod-secrets-e2e8c654-6a1e-4969-830f-8096927f8e36": Phase="Pending", Reason="", readiness=false. Elapsed: 19.173829ms
Sep  9 00:29:02.559: INFO: Pod "pod-secrets-e2e8c654-6a1e-4969-830f-8096927f8e36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023211283s
Sep  9 00:29:04.564: INFO: Pod "pod-secrets-e2e8c654-6a1e-4969-830f-8096927f8e36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027560296s
STEP: Saw pod success
Sep  9 00:29:04.564: INFO: Pod "pod-secrets-e2e8c654-6a1e-4969-830f-8096927f8e36" satisfied condition "success or failure"
Sep  9 00:29:04.567: INFO: Trying to get logs from node iruya-worker pod pod-secrets-e2e8c654-6a1e-4969-830f-8096927f8e36 container secret-volume-test: 
STEP: delete the pod
Sep  9 00:29:04.603: INFO: Waiting for pod pod-secrets-e2e8c654-6a1e-4969-830f-8096927f8e36 to disappear
Sep  9 00:29:04.618: INFO: Pod pod-secrets-e2e8c654-6a1e-4969-830f-8096927f8e36 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:29:04.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7270" for this suite.
Sep  9 00:29:10.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:29:10.714: INFO: namespace secrets-7270 deletion completed in 6.092375649s
STEP: Destroying namespace "secret-namespace-2689" for this suite.
Sep  9 00:29:16.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:29:16.806: INFO: namespace secret-namespace-2689 deletion completed in 6.092621144s

• [SLOW TEST:16.426 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:29:16.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Sep  9 00:29:16.859: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Sep  9 00:29:16.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5541'
Sep  9 00:29:17.201: INFO: stderr: ""
Sep  9 00:29:17.201: INFO: stdout: "service/redis-slave created\n"
Sep  9 00:29:17.201: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Sep  9 00:29:17.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5541'
Sep  9 00:29:17.473: INFO: stderr: ""
Sep  9 00:29:17.473: INFO: stdout: "service/redis-master created\n"
Sep  9 00:29:17.473: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Sep  9 00:29:17.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5541'
Sep  9 00:29:17.785: INFO: stderr: ""
Sep  9 00:29:17.785: INFO: stdout: "service/frontend created\n"
Sep  9 00:29:17.785: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Sep  9 00:29:17.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5541'
Sep  9 00:29:18.070: INFO: stderr: ""
Sep  9 00:29:18.070: INFO: stdout: "deployment.apps/frontend created\n"
Sep  9 00:29:18.071: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Sep  9 00:29:18.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5541'
Sep  9 00:29:18.410: INFO: stderr: ""
Sep  9 00:29:18.410: INFO: stdout: "deployment.apps/redis-master created\n"
Sep  9 00:29:18.410: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Sep  9 00:29:18.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5541'
Sep  9 00:29:18.688: INFO: stderr: ""
Sep  9 00:29:18.688: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Sep  9 00:29:18.688: INFO: Waiting for all frontend pods to be Running.
Sep  9 00:29:28.739: INFO: Waiting for frontend to serve content.
Sep  9 00:29:28.759: INFO: Trying to add a new entry to the guestbook.
Sep  9 00:29:28.774: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Sep  9 00:29:28.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5541'
Sep  9 00:29:28.943: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  9 00:29:28.943: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Sep  9 00:29:28.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5541'
Sep  9 00:29:29.177: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  9 00:29:29.177: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Sep  9 00:29:29.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5541'
Sep  9 00:29:29.281: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  9 00:29:29.281: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Sep  9 00:29:29.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5541'
Sep  9 00:29:29.371: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  9 00:29:29.371: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Sep  9 00:29:29.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5541'
Sep  9 00:29:29.505: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  9 00:29:29.505: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Sep  9 00:29:29.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5541'
Sep  9 00:29:29.715: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  9 00:29:29.715: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:29:29.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5541" for this suite.
Sep  9 00:30:16.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:30:16.461: INFO: namespace kubectl-5541 deletion completed in 46.346264589s

• [SLOW TEST:59.655 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:30:16.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:30:16.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:30:20.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7743" for this suite.
Sep  9 00:30:58.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:30:58.662: INFO: namespace pods-7743 deletion completed in 38.086953608s

• [SLOW TEST:42.199 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:30:58.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:31:02.940: INFO: Waiting up to 5m0s for pod "client-envvars-18f2bf26-8813-4fce-b89c-8a5bcc80e4dd" in namespace "pods-4047" to be "success or failure"
Sep  9 00:31:02.944: INFO: Pod "client-envvars-18f2bf26-8813-4fce-b89c-8a5bcc80e4dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.275613ms
Sep  9 00:31:05.042: INFO: Pod "client-envvars-18f2bf26-8813-4fce-b89c-8a5bcc80e4dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101653988s
Sep  9 00:31:07.046: INFO: Pod "client-envvars-18f2bf26-8813-4fce-b89c-8a5bcc80e4dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105768744s
STEP: Saw pod success
Sep  9 00:31:07.046: INFO: Pod "client-envvars-18f2bf26-8813-4fce-b89c-8a5bcc80e4dd" satisfied condition "success or failure"
Sep  9 00:31:07.049: INFO: Trying to get logs from node iruya-worker pod client-envvars-18f2bf26-8813-4fce-b89c-8a5bcc80e4dd container env3cont: 
STEP: delete the pod
Sep  9 00:31:07.086: INFO: Waiting for pod client-envvars-18f2bf26-8813-4fce-b89c-8a5bcc80e4dd to disappear
Sep  9 00:31:07.100: INFO: Pod client-envvars-18f2bf26-8813-4fce-b89c-8a5bcc80e4dd no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:31:07.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4047" for this suite.
Sep  9 00:31:57.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:31:57.220: INFO: namespace pods-4047 deletion completed in 50.117019096s

• [SLOW TEST:58.558 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:31:57.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-fcbacbc6-11b4-4217-a28b-752b24fff243 in namespace container-probe-3957
Sep  9 00:32:01.297: INFO: Started pod test-webserver-fcbacbc6-11b4-4217-a28b-752b24fff243 in namespace container-probe-3957
STEP: checking the pod's current state and verifying that restartCount is present
Sep  9 00:32:01.299: INFO: Initial restart count of pod test-webserver-fcbacbc6-11b4-4217-a28b-752b24fff243 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:36:02.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3957" for this suite.
Sep  9 00:36:08.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:36:08.204: INFO: namespace container-probe-3957 deletion completed in 6.157977193s

• [SLOW TEST:250.984 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:36:08.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:36:08.264: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Sep  9 00:36:10.393: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:36:11.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6360" for this suite.
Sep  9 00:36:17.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:36:17.566: INFO: namespace replication-controller-6360 deletion completed in 6.09161686s

• [SLOW TEST:9.362 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
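The quota spec above creates a ResourceQuota capped at two pods, creates an RC that asks for more, verifies the RC surfaces a ReplicaFailure-style condition, then scales it back down. A hedged sketch of the two objects — the names match the log, but the replica count, selector, and pod template are assumptions:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test          # name taken from the log
spec:
  hard:
    pods: "2"                   # "allows only two pods to run in the current namespace"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test          # name taken from the log
spec:
  replicas: 3                   # assumed; anything above the quota triggers the condition
  selector:
    app: condition-test         # illustrative selector/template, not shown in the log
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1   # assumed image
```

Once replicas is reduced to fit the quota (the "Scaling down" step), the controller clears the failure condition again, which is the final check in the spec.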
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:36:17.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:36:48.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5661" for this suite.
Sep  9 00:36:54.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:36:54.461: INFO: namespace container-runtime-5661 deletion completed in 6.129478403s

• [SLOW TEST:36.894 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
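The three containers in the spec above ('terminate-cmd-rpa', 'terminate-cmd-rpof', 'terminate-cmd-rpn') correspond, by their suffixes, to the restartPolicy values Always, OnFailure, and Never; each runs a short command and the test checks the resulting RestartCount, Phase, Ready condition, and State. A sketch of one such pod — the suffix expansion, image, and command are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpn       # the "Never" variant from the log
spec:
  restartPolicy: Never          # rpa -> Always, rpof -> OnFailure, rpn -> Never (assumed expansion)
  containers:
  - name: terminate-cmd-rpn
    image: busybox              # assumed image
    command: ["sh", "-c", "exit 0"]   # illustrative; the real test varies the exit code
```

With restartPolicy: Never a zero exit code leaves the pod in phase Succeeded with RestartCount 0; with Always the kubelet restarts the container and the count climbs, which is what the per-container 'RestartCount' steps above compare.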
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:36:54.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:36:54.550: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: 
alternatives.log
containers/
(the same /logs/ directory listing is returned for each of the remaining proxy requests)
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Sep  9 00:37:00.814: INFO: namespace kubectl-4871
Sep  9 00:37:00.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4871'
Sep  9 00:37:03.610: INFO: stderr: ""
Sep  9 00:37:03.610: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep  9 00:37:04.663: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:37:04.663: INFO: Found 0 / 1
Sep  9 00:37:05.615: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:37:05.616: INFO: Found 0 / 1
Sep  9 00:37:06.615: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:37:06.615: INFO: Found 0 / 1
Sep  9 00:37:07.615: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:37:07.616: INFO: Found 1 / 1
Sep  9 00:37:07.616: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Sep  9 00:37:07.619: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:37:07.619: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep  9 00:37:07.619: INFO: wait on redis-master startup in kubectl-4871 
Sep  9 00:37:07.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vbkb8 redis-master --namespace=kubectl-4871'
Sep  9 00:37:07.734: INFO: stderr: ""
Sep  9 00:37:07.734: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Sep 00:37:06.412 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Sep 00:37:06.412 # Server started, Redis version 3.2.12\n1:M 09 Sep 00:37:06.412 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Sep 00:37:06.412 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Sep  9 00:37:07.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4871'
Sep  9 00:37:07.871: INFO: stderr: ""
Sep  9 00:37:07.871: INFO: stdout: "service/rm2 exposed\n"
Sep  9 00:37:07.879: INFO: Service rm2 in namespace kubectl-4871 found.
STEP: exposing service
Sep  9 00:37:09.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4871'
Sep  9 00:37:10.015: INFO: stderr: ""
Sep  9 00:37:10.015: INFO: stdout: "service/rm3 exposed\n"
Sep  9 00:37:10.041: INFO: Service rm3 in namespace kubectl-4871 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:37:12.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4871" for this suite.
Sep  9 00:37:34.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:37:34.161: INFO: namespace kubectl-4871 deletion completed in 22.105406549s

• [SLOW TEST:33.410 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
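The two `kubectl expose` invocations above generate Service objects. The first, `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379`, is roughly equivalent to this manifest — the selector is inferred from the RC's `app: redis` pod label seen earlier in the log, so treat it as a sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2                  # --name=rm2
spec:
  selector:
    app: redis               # inferred from the RC's pod labels (map[app:redis] in the log)
  ports:
  - port: 1234               # --port=1234
    targetPort: 6379         # --target-port=6379
```

Exposing the service again as rm3 simply produces a second Service with the same backing selector and port 2345, so both front the same Redis pod.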
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:37:34.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep  9 00:37:38.278: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:37:38.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6018" for this suite.
Sep  9 00:37:44.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:37:44.645: INFO: namespace container-runtime-6018 deletion completed in 6.214594104s

• [SLOW TEST:10.482 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
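The spec above passes with an empty termination message because TerminationMessagePolicy FallbackToLogsOnError only falls back to container logs when the container fails: a container that succeeds and writes nothing to /dev/termination-log reports the empty message seen at 00:37:38.278 (`Expected: &{} to match ...`). Sketch of such a container spec, with assumed name, image, and command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-pod          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container  # hypothetical name
    image: busybox                       # assumed image
    command: ["sh", "-c", "exit 0"]      # succeeds without writing a termination message
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```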
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:37:44.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep  9 00:37:44.756: INFO: Waiting up to 5m0s for pod "downward-api-19fe0ae1-bdaa-4fef-876c-bd6da72b3971" in namespace "downward-api-2505" to be "success or failure"
Sep  9 00:37:44.764: INFO: Pod "downward-api-19fe0ae1-bdaa-4fef-876c-bd6da72b3971": Phase="Pending", Reason="", readiness=false. Elapsed: 8.401972ms
Sep  9 00:37:46.769: INFO: Pod "downward-api-19fe0ae1-bdaa-4fef-876c-bd6da72b3971": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012806493s
Sep  9 00:37:48.773: INFO: Pod "downward-api-19fe0ae1-bdaa-4fef-876c-bd6da72b3971": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017044577s
STEP: Saw pod success
Sep  9 00:37:48.773: INFO: Pod "downward-api-19fe0ae1-bdaa-4fef-876c-bd6da72b3971" satisfied condition "success or failure"
Sep  9 00:37:48.776: INFO: Trying to get logs from node iruya-worker pod downward-api-19fe0ae1-bdaa-4fef-876c-bd6da72b3971 container dapi-container: 
STEP: delete the pod
Sep  9 00:37:48.839: INFO: Waiting for pod downward-api-19fe0ae1-bdaa-4fef-876c-bd6da72b3971 to disappear
Sep  9 00:37:48.857: INFO: Pod downward-api-19fe0ae1-bdaa-4fef-876c-bd6da72b3971 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:37:48.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2505" for this suite.
Sep  9 00:37:54.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:37:54.966: INFO: namespace downward-api-2505 deletion completed in 6.105267628s

• [SLOW TEST:10.320 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:37:54.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Sep  9 00:37:55.803: INFO: Pod name wrapped-volume-race-ef99f667-3404-418d-8a1e-291aefb872a0: Found 0 pods out of 5
Sep  9 00:38:00.811: INFO: Pod name wrapped-volume-race-ef99f667-3404-418d-8a1e-291aefb872a0: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ef99f667-3404-418d-8a1e-291aefb872a0 in namespace emptydir-wrapper-7048, will wait for the garbage collector to delete the pods
Sep  9 00:38:14.891: INFO: Deleting ReplicationController wrapped-volume-race-ef99f667-3404-418d-8a1e-291aefb872a0 took: 7.235877ms
Sep  9 00:38:15.191: INFO: Terminating ReplicationController wrapped-volume-race-ef99f667-3404-418d-8a1e-291aefb872a0 pods took: 300.255423ms
STEP: Creating RC which spawns configmap-volume pods
Sep  9 00:38:54.143: INFO: Pod name wrapped-volume-race-5ace79b0-b7e8-463f-bc61-5e2d30ffe413: Found 0 pods out of 5
Sep  9 00:38:59.151: INFO: Pod name wrapped-volume-race-5ace79b0-b7e8-463f-bc61-5e2d30ffe413: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5ace79b0-b7e8-463f-bc61-5e2d30ffe413 in namespace emptydir-wrapper-7048, will wait for the garbage collector to delete the pods
Sep  9 00:39:15.319: INFO: Deleting ReplicationController wrapped-volume-race-5ace79b0-b7e8-463f-bc61-5e2d30ffe413 took: 7.352291ms
Sep  9 00:39:15.619: INFO: Terminating ReplicationController wrapped-volume-race-5ace79b0-b7e8-463f-bc61-5e2d30ffe413 pods took: 300.292189ms
STEP: Creating RC which spawns configmap-volume pods
Sep  9 00:39:52.162: INFO: Pod name wrapped-volume-race-f1570f09-90cf-49ae-90d9-a37a63285dd8: Found 0 pods out of 5
Sep  9 00:39:57.169: INFO: Pod name wrapped-volume-race-f1570f09-90cf-49ae-90d9-a37a63285dd8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f1570f09-90cf-49ae-90d9-a37a63285dd8 in namespace emptydir-wrapper-7048, will wait for the garbage collector to delete the pods
Sep  9 00:40:13.262: INFO: Deleting ReplicationController wrapped-volume-race-f1570f09-90cf-49ae-90d9-a37a63285dd8 took: 11.089379ms
Sep  9 00:40:13.662: INFO: Terminating ReplicationController wrapped-volume-race-f1570f09-90cf-49ae-90d9-a37a63285dd8 pods took: 400.34093ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:40:55.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7048" for this suite.
Sep  9 00:41:03.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:41:03.316: INFO: namespace emptydir-wrapper-7048 deletion completed in 8.09027211s

• [SLOW TEST:188.351 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
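Each racing pod in the spec above mounts ConfigMap-backed volumes while an RC of five such pods is repeatedly created and garbage-collected, checking that concurrent volume setup does not race. One pod of that shape, sketched with a single ConfigMap volume — the real test creates 50 ConfigMaps, and all names here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-race-pod     # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox                  # assumed image
    command: ["sleep", "10000"]     # keep the pod Running while volumes are checked
    volumeMounts:
    - name: racey-configmap-0
      mountPath: /etc/config-0
  volumes:
  - name: racey-configmap-0
    configMap:
      name: racey-configmap-0       # one of the 50 ConfigMaps the test creates (assumed name)
```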
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:41:03.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:41:03.393: INFO: Pod name rollover-pod: Found 0 pods out of 1
Sep  9 00:41:08.398: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Sep  9 00:41:08.398: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Sep  9 00:41:10.402: INFO: Creating deployment "test-rollover-deployment"
Sep  9 00:41:10.451: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Sep  9 00:41:12.458: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Sep  9 00:41:12.465: INFO: Ensure that both replica sets have 1 created replica
Sep  9 00:41:12.471: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Sep  9 00:41:12.478: INFO: Updating deployment test-rollover-deployment
Sep  9 00:41:12.478: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Sep  9 00:41:14.507: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Sep  9 00:41:14.513: INFO: Make sure deployment "test-rollover-deployment" is complete
Sep  9 00:41:14.519: INFO: all replica sets need to contain the pod-template-hash label
Sep  9 00:41:14.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208872, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 00:41:16.527: INFO: all replica sets need to contain the pod-template-hash label
Sep  9 00:41:16.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208875, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 00:41:18.534: INFO: all replica sets need to contain the pod-template-hash label
Sep  9 00:41:18.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208875, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 00:41:20.527: INFO: all replica sets need to contain the pod-template-hash label
Sep  9 00:41:20.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208875, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 00:41:22.528: INFO: all replica sets need to contain the pod-template-hash label
Sep  9 00:41:22.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208875, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 00:41:24.527: INFO: all replica sets need to contain the pod-template-hash label
Sep  9 00:41:24.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208875, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208870, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 00:41:26.528: INFO: 
Sep  9 00:41:26.528: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep  9 00:41:26.536: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3851,SelfLink:/apis/apps/v1/namespaces/deployment-3851/deployments/test-rollover-deployment,UID:682dc5e3-647b-4e40-823c-d0d436e049e7,ResourceVersion:324006,Generation:2,CreationTimestamp:2020-09-09 00:41:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-09-09 00:41:10 +0000 UTC 2020-09-09 00:41:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-09-09 00:41:25 +0000 UTC 2020-09-09 00:41:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Sep  9 00:41:26.540: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3851,SelfLink:/apis/apps/v1/namespaces/deployment-3851/replicasets/test-rollover-deployment-854595fc44,UID:6f3d5b09-2629-43c3-8534-62736dc8844d,ResourceVersion:323996,Generation:2,CreationTimestamp:2020-09-09 00:41:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 682dc5e3-647b-4e40-823c-d0d436e049e7 0xc0028eec57 0xc0028eec58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Sep  9 00:41:26.540: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Sep  9 00:41:26.540: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3851,SelfLink:/apis/apps/v1/namespaces/deployment-3851/replicasets/test-rollover-controller,UID:f0de2d10-0b54-4639-8486-c950d296ab1d,ResourceVersion:324005,Generation:2,CreationTimestamp:2020-09-09 00:41:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 682dc5e3-647b-4e40-823c-d0d436e049e7 0xc000a63fe7 0xc000a63fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep  9 00:41:26.540: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3851,SelfLink:/apis/apps/v1/namespaces/deployment-3851/replicasets/test-rollover-deployment-9b8b997cf,UID:af6c5fe3-95c8-4f33-9fb0-f7dd3613b8a1,ResourceVersion:323963,Generation:2,CreationTimestamp:2020-09-09 00:41:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 682dc5e3-647b-4e40-823c-d0d436e049e7 0xc0028eed20 0xc0028eed21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep  9 00:41:26.544: INFO: Pod "test-rollover-deployment-854595fc44-qx5cw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-qx5cw,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3851,SelfLink:/api/v1/namespaces/deployment-3851/pods/test-rollover-deployment-854595fc44-qx5cw,UID:5b26f293-3f3b-4861-9c12-395d23ca3a22,ResourceVersion:323974,Generation:0,CreationTimestamp:2020-09-09 00:41:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 6f3d5b09-2629-43c3-8534-62736dc8844d 0xc0005ce347 0xc0005ce348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ndg4k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ndg4k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-ndg4k true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0005ce420} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0005ce470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:41:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:41:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:41:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:41:12 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.139,StartTime:2020-09-09 00:41:12 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-09-09 00:41:15 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://26ba909561d5e5c7798731bb326bbd29aa6d7bad946222fffcaaef14ac9d9b36}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:41:26.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3851" for this suite.
Sep  9 00:41:32.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:41:32.715: INFO: namespace deployment-3851 deletion completed in 6.167133781s

• [SLOW TEST:29.398 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:41:32.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep  9 00:41:36.837: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:41:37.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8205" for this suite.
Sep  9 00:41:43.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:41:43.130: INFO: namespace container-runtime-8205 deletion completed in 6.096068252s

• [SLOW TEST:10.414 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:41:43.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:41:43.209: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Sep  9 00:41:43.219: INFO: Pod name sample-pod: Found 0 pods out of 1
Sep  9 00:41:48.224: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Sep  9 00:41:48.224: INFO: Creating deployment "test-rolling-update-deployment"
Sep  9 00:41:48.229: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Sep  9 00:41:48.240: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Sep  9 00:41:50.246: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Sep  9 00:41:50.249: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208908, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208908, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208908, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735208908, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 00:41:52.253: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep  9 00:41:52.261: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-5310,SelfLink:/apis/apps/v1/namespaces/deployment-5310/deployments/test-rolling-update-deployment,UID:20d75b96-6caa-4e24-9f89-b063023c87d6,ResourceVersion:324165,Generation:1,CreationTimestamp:2020-09-09 00:41:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-09-09 00:41:48 +0000 UTC 2020-09-09 00:41:48 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-09-09 00:41:51 +0000 UTC 2020-09-09 00:41:48 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Sep  9 00:41:52.263: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-5310,SelfLink:/apis/apps/v1/namespaces/deployment-5310/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:cd8e8dfd-0b25-4280-97ef-d2172d661bdc,ResourceVersion:324154,Generation:1,CreationTimestamp:2020-09-09 00:41:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 20d75b96-6caa-4e24-9f89-b063023c87d6 0xc003a16ba7 0xc003a16ba8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Sep  9 00:41:52.263: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Sep  9 00:41:52.263: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-5310,SelfLink:/apis/apps/v1/namespaces/deployment-5310/replicasets/test-rolling-update-controller,UID:a9851b11-25a5-4a41-9146-0b763814187b,ResourceVersion:324163,Generation:2,CreationTimestamp:2020-09-09 00:41:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 20d75b96-6caa-4e24-9f89-b063023c87d6 0xc003a16ad7 0xc003a16ad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep  9 00:41:52.267: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-xsmgm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-xsmgm,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-5310,SelfLink:/api/v1/namespaces/deployment-5310/pods/test-rolling-update-deployment-79f6b9d75c-xsmgm,UID:795f7584-8324-4f12-b45f-f9815bf0dbd9,ResourceVersion:324153,Generation:0,CreationTimestamp:2020-09-09 00:41:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c cd8e8dfd-0b25-4280-97ef-d2172d661bdc 0xc0026bdb57 0xc0026bdb58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2dmns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2dmns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-2dmns true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026bdbd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026bdbf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:41:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:41:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:41:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:41:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.70,StartTime:2020-09-09 00:41:48 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-09-09 00:41:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://46fcdcdfde86509feca1b097ccfdea2a7298d562a01b25da6b4f8be1ef791834}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:41:52.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5310" for this suite.
Sep  9 00:41:58.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:41:58.450: INFO: namespace deployment-5310 deletion completed in 6.180019335s

• [SLOW TEST:15.320 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
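Editorial note: the `Waiting up to 5m0s for pod ... to be "success or failure"` lines in this log come from the e2e framework's poll-until-condition loop (check, sleep, re-check until a deadline). A minimal sketch of that pattern — a hypothetical helper, not the framework's actual Go code; clock/sleep are injectable so it can be exercised without a cluster:

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll `check` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout.
    Mirrors the log's 'Elapsed: 2.02s ... Phase=Succeeded' cadence."""
    deadline = clock() + timeout
    while True:
        if check():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```

In the log above the condition flips on the third poll (Pending, Pending, Succeeded), roughly 4 seconds in.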
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:41:58.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep  9 00:42:02.602: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:42:02.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-682" for this suite.
Sep  9 00:42:08.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:42:08.733: INFO: namespace container-runtime-682 deletion completed in 6.115003377s

• [SLOW TEST:10.283 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
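Editorial note: the termination-message spec above ends with `Expected: &{OK} to match Container's Termination Message: OK`. The rule it verifies: with `TerminationMessagePolicy: FallbackToLogsOnError`, the kubelet falls back to the log tail only when the message file is empty *and* the container failed; here the pod succeeded with the file populated, so the file wins. A simplified model of that selection logic (an assumption-based sketch, not kubelet code):

```python
def termination_message(file_contents, log_tail, exit_code,
                        policy="File"):
    """Pick the container's termination message.

    Simplified model: under FallbackToLogsOnError the log tail is used
    only if the termination-message file is empty and the container
    exited non-zero; otherwise the file contents (possibly empty) win.
    """
    if policy == "FallbackToLogsOnError" and not file_contents and exit_code != 0:
        return log_tail
    return file_contents
```

For the spec above: file contains `OK`, exit code 0, so the message is `OK` regardless of the fallback policy.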
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:42:08.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep  9 00:42:08.826: INFO: Waiting up to 5m0s for pod "pod-2d8b02ea-89c6-4fec-931f-485e015fabb2" in namespace "emptydir-2474" to be "success or failure"
Sep  9 00:42:08.830: INFO: Pod "pod-2d8b02ea-89c6-4fec-931f-485e015fabb2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.566887ms
Sep  9 00:42:10.931: INFO: Pod "pod-2d8b02ea-89c6-4fec-931f-485e015fabb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105148393s
Sep  9 00:42:12.949: INFO: Pod "pod-2d8b02ea-89c6-4fec-931f-485e015fabb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123139471s
STEP: Saw pod success
Sep  9 00:42:12.950: INFO: Pod "pod-2d8b02ea-89c6-4fec-931f-485e015fabb2" satisfied condition "success or failure"
Sep  9 00:42:12.952: INFO: Trying to get logs from node iruya-worker pod pod-2d8b02ea-89c6-4fec-931f-485e015fabb2 container test-container: 
STEP: delete the pod
Sep  9 00:42:13.006: INFO: Waiting for pod pod-2d8b02ea-89c6-4fec-931f-485e015fabb2 to disappear
Sep  9 00:42:13.040: INFO: Pod pod-2d8b02ea-89c6-4fec-931f-485e015fabb2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:42:13.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2474" for this suite.
Sep  9 00:42:19.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:42:19.175: INFO: namespace emptydir-2474 deletion completed in 6.13124481s

• [SLOW TEST:10.442 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:42:19.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Sep  9 00:42:27.451: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  9 00:42:27.502: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  9 00:42:29.502: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  9 00:42:29.512: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  9 00:42:31.502: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  9 00:42:31.507: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  9 00:42:33.502: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  9 00:42:33.506: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  9 00:42:35.502: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  9 00:42:35.506: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  9 00:42:37.502: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  9 00:42:37.506: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  9 00:42:39.502: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  9 00:42:39.506: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  9 00:42:41.502: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  9 00:42:41.506: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  9 00:42:43.502: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  9 00:42:43.505: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  9 00:42:45.502: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  9 00:42:45.515: INFO: Pod pod-with-poststart-exec-hook still exists
Sep  9 00:42:47.502: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep  9 00:42:47.509: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:42:47.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1809" for this suite.
Sep  9 00:43:11.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:43:11.612: INFO: namespace container-lifecycle-hook-1809 deletion completed in 24.098992858s

• [SLOW TEST:52.436 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:43:11.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep  9 00:43:11.694: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a46cbde2-7447-46d0-a835-82642c48c8fe" in namespace "downward-api-5073" to be "success or failure"
Sep  9 00:43:11.700: INFO: Pod "downwardapi-volume-a46cbde2-7447-46d0-a835-82642c48c8fe": Phase="Pending", Reason="", readiness=false. Elapsed: 5.534078ms
Sep  9 00:43:13.724: INFO: Pod "downwardapi-volume-a46cbde2-7447-46d0-a835-82642c48c8fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029935555s
Sep  9 00:43:15.727: INFO: Pod "downwardapi-volume-a46cbde2-7447-46d0-a835-82642c48c8fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032996619s
STEP: Saw pod success
Sep  9 00:43:15.727: INFO: Pod "downwardapi-volume-a46cbde2-7447-46d0-a835-82642c48c8fe" satisfied condition "success or failure"
Sep  9 00:43:15.729: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a46cbde2-7447-46d0-a835-82642c48c8fe container client-container: 
STEP: delete the pod
Sep  9 00:43:15.756: INFO: Waiting for pod downwardapi-volume-a46cbde2-7447-46d0-a835-82642c48c8fe to disappear
Sep  9 00:43:15.760: INFO: Pod downwardapi-volume-a46cbde2-7447-46d0-a835-82642c48c8fe no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:43:15.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5073" for this suite.
Sep  9 00:43:21.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:43:21.881: INFO: namespace downward-api-5073 deletion completed in 6.118069763s

• [SLOW TEST:10.269 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
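Editorial note: the downward API spec above checks that when a container sets no memory limit, `resourceFieldRef: limits.memory` resolves to the node's allocatable memory instead. A one-line model of that fallback (hypothetical helper for illustration):

```python
def downward_memory_limit(container_limit_bytes, node_allocatable_bytes):
    """limits.memory exposed via the downward API: the container's own
    limit if set, otherwise the node allocatable value (the default
    this spec verifies)."""
    if container_limit_bytes is not None:
        return container_limit_bytes
    return node_allocatable_bytes
```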
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:43:21.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1986
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1986
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1986
Sep  9 00:43:21.982: INFO: Found 0 stateful pods, waiting for 1
Sep  9 00:43:31.986: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Sep  9 00:43:31.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1986 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep  9 00:43:32.355: INFO: stderr: "I0909 00:43:32.211349    1695 log.go:172] (0xc000aba420) (0xc000720780) Create stream\nI0909 00:43:32.211392    1695 log.go:172] (0xc000aba420) (0xc000720780) Stream added, broadcasting: 1\nI0909 00:43:32.215102    1695 log.go:172] (0xc000aba420) Reply frame received for 1\nI0909 00:43:32.215153    1695 log.go:172] (0xc000aba420) (0xc000720000) Create stream\nI0909 00:43:32.215172    1695 log.go:172] (0xc000aba420) (0xc000720000) Stream added, broadcasting: 3\nI0909 00:43:32.216534    1695 log.go:172] (0xc000aba420) Reply frame received for 3\nI0909 00:43:32.216578    1695 log.go:172] (0xc000aba420) (0xc00060c1e0) Create stream\nI0909 00:43:32.216619    1695 log.go:172] (0xc000aba420) (0xc00060c1e0) Stream added, broadcasting: 5\nI0909 00:43:32.217556    1695 log.go:172] (0xc000aba420) Reply frame received for 5\nI0909 00:43:32.296659    1695 log.go:172] (0xc000aba420) Data frame received for 5\nI0909 00:43:32.296694    1695 log.go:172] (0xc00060c1e0) (5) Data frame handling\nI0909 00:43:32.296714    1695 log.go:172] (0xc00060c1e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0909 00:43:32.349200    1695 log.go:172] (0xc000aba420) Data frame received for 5\nI0909 00:43:32.349236    1695 log.go:172] (0xc00060c1e0) (5) Data frame handling\nI0909 00:43:32.349261    1695 log.go:172] (0xc000aba420) Data frame received for 3\nI0909 00:43:32.349274    1695 log.go:172] (0xc000720000) (3) Data frame handling\nI0909 00:43:32.349287    1695 log.go:172] (0xc000720000) (3) Data frame sent\nI0909 00:43:32.349301    1695 log.go:172] (0xc000aba420) Data frame received for 3\nI0909 00:43:32.349311    1695 log.go:172] (0xc000720000) (3) Data frame handling\nI0909 00:43:32.351207    1695 log.go:172] (0xc000aba420) Data frame received for 1\nI0909 00:43:32.351224    1695 log.go:172] (0xc000720780) (1) Data frame handling\nI0909 00:43:32.351237    1695 log.go:172] (0xc000720780) (1) Data frame sent\nI0909 00:43:32.351245    
1695 log.go:172] (0xc000aba420) (0xc000720780) Stream removed, broadcasting: 1\nI0909 00:43:32.351347    1695 log.go:172] (0xc000aba420) Go away received\nI0909 00:43:32.351464    1695 log.go:172] (0xc000aba420) (0xc000720780) Stream removed, broadcasting: 1\nI0909 00:43:32.351479    1695 log.go:172] (0xc000aba420) (0xc000720000) Stream removed, broadcasting: 3\nI0909 00:43:32.351484    1695 log.go:172] (0xc000aba420) (0xc00060c1e0) Stream removed, broadcasting: 5\n"
Sep  9 00:43:32.355: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep  9 00:43:32.356: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep  9 00:43:32.359: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Sep  9 00:43:42.364: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep  9 00:43:42.364: INFO: Waiting for statefulset status.replicas updated to 0
Sep  9 00:43:42.378: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999422s
Sep  9 00:43:43.382: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995380824s
Sep  9 00:43:44.387: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990818174s
Sep  9 00:43:45.392: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.985841261s
Sep  9 00:43:46.397: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.981182707s
Sep  9 00:43:47.401: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.9761001s
Sep  9 00:43:48.406: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.972061222s
Sep  9 00:43:49.411: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.967085035s
Sep  9 00:43:50.416: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.962398884s
Sep  9 00:43:51.419: INFO: Verifying statefulset ss doesn't scale past 1 for another 957.529343ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1986
Sep  9 00:43:52.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1986 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  9 00:43:52.634: INFO: stderr: "I0909 00:43:52.553062    1718 log.go:172] (0xc0009e8370) (0xc0007c2640) Create stream\nI0909 00:43:52.553144    1718 log.go:172] (0xc0009e8370) (0xc0007c2640) Stream added, broadcasting: 1\nI0909 00:43:52.557358    1718 log.go:172] (0xc0009e8370) Reply frame received for 1\nI0909 00:43:52.557417    1718 log.go:172] (0xc0009e8370) (0xc0008aa000) Create stream\nI0909 00:43:52.557432    1718 log.go:172] (0xc0009e8370) (0xc0008aa000) Stream added, broadcasting: 3\nI0909 00:43:52.558786    1718 log.go:172] (0xc0009e8370) Reply frame received for 3\nI0909 00:43:52.558841    1718 log.go:172] (0xc0009e8370) (0xc0007c26e0) Create stream\nI0909 00:43:52.558855    1718 log.go:172] (0xc0009e8370) (0xc0007c26e0) Stream added, broadcasting: 5\nI0909 00:43:52.559859    1718 log.go:172] (0xc0009e8370) Reply frame received for 5\nI0909 00:43:52.629174    1718 log.go:172] (0xc0009e8370) Data frame received for 5\nI0909 00:43:52.629211    1718 log.go:172] (0xc0007c26e0) (5) Data frame handling\nI0909 00:43:52.629225    1718 log.go:172] (0xc0007c26e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0909 00:43:52.629578    1718 log.go:172] (0xc0009e8370) Data frame received for 5\nI0909 00:43:52.629614    1718 log.go:172] (0xc0007c26e0) (5) Data frame handling\nI0909 00:43:52.629648    1718 log.go:172] (0xc0009e8370) Data frame received for 3\nI0909 00:43:52.629661    1718 log.go:172] (0xc0008aa000) (3) Data frame handling\nI0909 00:43:52.629685    1718 log.go:172] (0xc0008aa000) (3) Data frame sent\nI0909 00:43:52.629698    1718 log.go:172] (0xc0009e8370) Data frame received for 3\nI0909 00:43:52.629706    1718 log.go:172] (0xc0008aa000) (3) Data frame handling\nI0909 00:43:52.630879    1718 log.go:172] (0xc0009e8370) Data frame received for 1\nI0909 00:43:52.630906    1718 log.go:172] (0xc0007c2640) (1) Data frame handling\nI0909 00:43:52.630919    1718 log.go:172] (0xc0007c2640) (1) Data frame sent\nI0909 00:43:52.630932    
1718 log.go:172] (0xc0009e8370) (0xc0007c2640) Stream removed, broadcasting: 1\nI0909 00:43:52.631003    1718 log.go:172] (0xc0009e8370) Go away received\nI0909 00:43:52.631273    1718 log.go:172] (0xc0009e8370) (0xc0007c2640) Stream removed, broadcasting: 1\nI0909 00:43:52.631296    1718 log.go:172] (0xc0009e8370) (0xc0008aa000) Stream removed, broadcasting: 3\nI0909 00:43:52.631306    1718 log.go:172] (0xc0009e8370) (0xc0007c26e0) Stream removed, broadcasting: 5\n"
Sep  9 00:43:52.634: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep  9 00:43:52.634: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep  9 00:43:52.641: INFO: Found 1 stateful pods, waiting for 3
Sep  9 00:44:02.645: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Sep  9 00:44:02.645: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Sep  9 00:44:02.645: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Sep  9 00:44:02.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1986 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep  9 00:44:02.829: INFO: stderr: "I0909 00:44:02.770768    1738 log.go:172] (0xc00092c420) (0xc0004ee6e0) Create stream\nI0909 00:44:02.770819    1738 log.go:172] (0xc00092c420) (0xc0004ee6e0) Stream added, broadcasting: 1\nI0909 00:44:02.774152    1738 log.go:172] (0xc00092c420) Reply frame received for 1\nI0909 00:44:02.774178    1738 log.go:172] (0xc00092c420) (0xc0004ee000) Create stream\nI0909 00:44:02.774186    1738 log.go:172] (0xc00092c420) (0xc0004ee000) Stream added, broadcasting: 3\nI0909 00:44:02.775345    1738 log.go:172] (0xc00092c420) Reply frame received for 3\nI0909 00:44:02.775384    1738 log.go:172] (0xc00092c420) (0xc0004e2000) Create stream\nI0909 00:44:02.775403    1738 log.go:172] (0xc00092c420) (0xc0004e2000) Stream added, broadcasting: 5\nI0909 00:44:02.776534    1738 log.go:172] (0xc00092c420) Reply frame received for 5\nI0909 00:44:02.822333    1738 log.go:172] (0xc00092c420) Data frame received for 3\nI0909 00:44:02.822364    1738 log.go:172] (0xc0004ee000) (3) Data frame handling\nI0909 00:44:02.822377    1738 log.go:172] (0xc0004ee000) (3) Data frame sent\nI0909 00:44:02.822386    1738 log.go:172] (0xc00092c420) Data frame received for 3\nI0909 00:44:02.822397    1738 log.go:172] (0xc0004ee000) (3) Data frame handling\nI0909 00:44:02.822454    1738 log.go:172] (0xc00092c420) Data frame received for 5\nI0909 00:44:02.822476    1738 log.go:172] (0xc0004e2000) (5) Data frame handling\nI0909 00:44:02.822505    1738 log.go:172] (0xc0004e2000) (5) Data frame sent\nI0909 00:44:02.822522    1738 log.go:172] (0xc00092c420) Data frame received for 5\nI0909 00:44:02.822529    1738 log.go:172] (0xc0004e2000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0909 00:44:02.823897    1738 log.go:172] (0xc00092c420) Data frame received for 1\nI0909 00:44:02.823916    1738 log.go:172] (0xc0004ee6e0) (1) Data frame handling\nI0909 00:44:02.823924    1738 log.go:172] (0xc0004ee6e0) (1) Data frame sent\nI0909 00:44:02.823933    
1738 log.go:172] (0xc00092c420) (0xc0004ee6e0) Stream removed, broadcasting: 1\nI0909 00:44:02.824256    1738 log.go:172] (0xc00092c420) (0xc0004ee6e0) Stream removed, broadcasting: 1\nI0909 00:44:02.824274    1738 log.go:172] (0xc00092c420) (0xc0004ee000) Stream removed, broadcasting: 3\nI0909 00:44:02.824281    1738 log.go:172] (0xc00092c420) (0xc0004e2000) Stream removed, broadcasting: 5\n"
Sep  9 00:44:02.829: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep  9 00:44:02.829: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep  9 00:44:02.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1986 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep  9 00:44:03.083: INFO: stderr: "I0909 00:44:02.951096    1758 log.go:172] (0xc000a14370) (0xc00089a640) Create stream\nI0909 00:44:02.951160    1758 log.go:172] (0xc000a14370) (0xc00089a640) Stream added, broadcasting: 1\nI0909 00:44:02.958887    1758 log.go:172] (0xc000a14370) Reply frame received for 1\nI0909 00:44:02.958942    1758 log.go:172] (0xc000a14370) (0xc0008b2000) Create stream\nI0909 00:44:02.958956    1758 log.go:172] (0xc000a14370) (0xc0008b2000) Stream added, broadcasting: 3\nI0909 00:44:02.960820    1758 log.go:172] (0xc000a14370) Reply frame received for 3\nI0909 00:44:02.960867    1758 log.go:172] (0xc000a14370) (0xc00089a6e0) Create stream\nI0909 00:44:02.960878    1758 log.go:172] (0xc000a14370) (0xc00089a6e0) Stream added, broadcasting: 5\nI0909 00:44:02.962638    1758 log.go:172] (0xc000a14370) Reply frame received for 5\nI0909 00:44:03.032201    1758 log.go:172] (0xc000a14370) Data frame received for 5\nI0909 00:44:03.032231    1758 log.go:172] (0xc00089a6e0) (5) Data frame handling\nI0909 00:44:03.032253    1758 log.go:172] (0xc00089a6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0909 00:44:03.076112    1758 log.go:172] (0xc000a14370) Data frame received for 3\nI0909 00:44:03.076159    1758 log.go:172] (0xc0008b2000) (3) Data frame handling\nI0909 00:44:03.076185    1758 log.go:172] (0xc000a14370) Data frame received for 5\nI0909 00:44:03.076219    1758 log.go:172] (0xc00089a6e0) (5) Data frame handling\nI0909 00:44:03.076251    1758 log.go:172] (0xc0008b2000) (3) Data frame sent\nI0909 00:44:03.076271    1758 log.go:172] (0xc000a14370) Data frame received for 3\nI0909 00:44:03.076368    1758 log.go:172] (0xc0008b2000) (3) Data frame handling\nI0909 00:44:03.078410    1758 log.go:172] (0xc000a14370) Data frame received for 1\nI0909 00:44:03.078433    1758 log.go:172] (0xc00089a640) (1) Data frame handling\nI0909 00:44:03.078445    1758 log.go:172] (0xc00089a640) (1) Data frame sent\nI0909 00:44:03.078459    
1758 log.go:172] (0xc000a14370) (0xc00089a640) Stream removed, broadcasting: 1\nI0909 00:44:03.078477    1758 log.go:172] (0xc000a14370) Go away received\nI0909 00:44:03.078961    1758 log.go:172] (0xc000a14370) (0xc00089a640) Stream removed, broadcasting: 1\nI0909 00:44:03.078990    1758 log.go:172] (0xc000a14370) (0xc0008b2000) Stream removed, broadcasting: 3\nI0909 00:44:03.079008    1758 log.go:172] (0xc000a14370) (0xc00089a6e0) Stream removed, broadcasting: 5\n"
Sep  9 00:44:03.083: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep  9 00:44:03.083: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep  9 00:44:03.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1986 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep  9 00:44:03.329: INFO: stderr: "I0909 00:44:03.221592    1778 log.go:172] (0xc000129080) (0xc0005e0be0) Create stream\nI0909 00:44:03.221652    1778 log.go:172] (0xc000129080) (0xc0005e0be0) Stream added, broadcasting: 1\nI0909 00:44:03.225955    1778 log.go:172] (0xc000129080) Reply frame received for 1\nI0909 00:44:03.225993    1778 log.go:172] (0xc000129080) (0xc0005e0320) Create stream\nI0909 00:44:03.226003    1778 log.go:172] (0xc000129080) (0xc0005e0320) Stream added, broadcasting: 3\nI0909 00:44:03.226904    1778 log.go:172] (0xc000129080) Reply frame received for 3\nI0909 00:44:03.226943    1778 log.go:172] (0xc000129080) (0xc0005e03c0) Create stream\nI0909 00:44:03.226963    1778 log.go:172] (0xc000129080) (0xc0005e03c0) Stream added, broadcasting: 5\nI0909 00:44:03.227813    1778 log.go:172] (0xc000129080) Reply frame received for 5\nI0909 00:44:03.285808    1778 log.go:172] (0xc000129080) Data frame received for 5\nI0909 00:44:03.285848    1778 log.go:172] (0xc0005e03c0) (5) Data frame handling\nI0909 00:44:03.285873    1778 log.go:172] (0xc0005e03c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0909 00:44:03.321791    1778 log.go:172] (0xc000129080) Data frame received for 3\nI0909 00:44:03.321827    1778 log.go:172] (0xc0005e0320) (3) Data frame handling\nI0909 00:44:03.321851    1778 log.go:172] (0xc0005e0320) (3) Data frame sent\nI0909 00:44:03.321870    1778 log.go:172] (0xc000129080) Data frame received for 3\nI0909 00:44:03.321893    1778 log.go:172] (0xc0005e0320) (3) Data frame handling\nI0909 00:44:03.322154    1778 log.go:172] (0xc000129080) Data frame received for 5\nI0909 00:44:03.322191    1778 log.go:172] (0xc0005e03c0) (5) Data frame handling\nI0909 00:44:03.324136    1778 log.go:172] (0xc000129080) Data frame received for 1\nI0909 00:44:03.324170    1778 log.go:172] (0xc0005e0be0) (1) Data frame handling\nI0909 00:44:03.324200    1778 log.go:172] (0xc0005e0be0) (1) Data frame sent\nI0909 00:44:03.324240    
1778 log.go:172] (0xc000129080) (0xc0005e0be0) Stream removed, broadcasting: 1\nI0909 00:44:03.324267    1778 log.go:172] (0xc000129080) Go away received\nI0909 00:44:03.324644    1778 log.go:172] (0xc000129080) (0xc0005e0be0) Stream removed, broadcasting: 1\nI0909 00:44:03.324666    1778 log.go:172] (0xc000129080) (0xc0005e0320) Stream removed, broadcasting: 3\nI0909 00:44:03.324682    1778 log.go:172] (0xc000129080) (0xc0005e03c0) Stream removed, broadcasting: 5\n"
Sep  9 00:44:03.329: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep  9 00:44:03.329: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep  9 00:44:03.329: INFO: Waiting for statefulset status.replicas to be updated to 0
Sep  9 00:44:03.332: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Sep  9 00:44:13.341: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep  9 00:44:13.341: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Sep  9 00:44:13.341: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Sep  9 00:44:13.355: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999685s
Sep  9 00:44:14.359: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992873076s
Sep  9 00:44:15.365: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988207857s
Sep  9 00:44:16.370: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983090358s
Sep  9 00:44:17.375: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977923134s
Sep  9 00:44:18.380: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972918175s
Sep  9 00:44:19.385: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967715515s
Sep  9 00:44:20.390: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962840864s
Sep  9 00:44:21.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957434272s
Sep  9 00:44:22.400: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.949361ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-1986
Sep  9 00:44:23.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1986 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  9 00:44:23.631: INFO: stderr: "I0909 00:44:23.562269    1799 log.go:172] (0xc000a36630) (0xc000626b40) Create stream\nI0909 00:44:23.562339    1799 log.go:172] (0xc000a36630) (0xc000626b40) Stream added, broadcasting: 1\nI0909 00:44:23.564836    1799 log.go:172] (0xc000a36630) Reply frame received for 1\nI0909 00:44:23.564893    1799 log.go:172] (0xc000a36630) (0xc000a64000) Create stream\nI0909 00:44:23.564913    1799 log.go:172] (0xc000a36630) (0xc000a64000) Stream added, broadcasting: 3\nI0909 00:44:23.566151    1799 log.go:172] (0xc000a36630) Reply frame received for 3\nI0909 00:44:23.566176    1799 log.go:172] (0xc000a36630) (0xc000a640a0) Create stream\nI0909 00:44:23.566184    1799 log.go:172] (0xc000a36630) (0xc000a640a0) Stream added, broadcasting: 5\nI0909 00:44:23.566983    1799 log.go:172] (0xc000a36630) Reply frame received for 5\nI0909 00:44:23.625380    1799 log.go:172] (0xc000a36630) Data frame received for 5\nI0909 00:44:23.625426    1799 log.go:172] (0xc000a640a0) (5) Data frame handling\nI0909 00:44:23.625439    1799 log.go:172] (0xc000a640a0) (5) Data frame sent\nI0909 00:44:23.625448    1799 log.go:172] (0xc000a36630) Data frame received for 5\nI0909 00:44:23.625455    1799 log.go:172] (0xc000a640a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0909 00:44:23.625477    1799 log.go:172] (0xc000a36630) Data frame received for 3\nI0909 00:44:23.625485    1799 log.go:172] (0xc000a64000) (3) Data frame handling\nI0909 00:44:23.625492    1799 log.go:172] (0xc000a64000) (3) Data frame sent\nI0909 00:44:23.625500    1799 log.go:172] (0xc000a36630) Data frame received for 3\nI0909 00:44:23.625506    1799 log.go:172] (0xc000a64000) (3) Data frame handling\nI0909 00:44:23.626814    1799 log.go:172] (0xc000a36630) Data frame received for 1\nI0909 00:44:23.626836    1799 log.go:172] (0xc000626b40) (1) Data frame handling\nI0909 00:44:23.626854    1799 log.go:172] (0xc000626b40) (1) Data frame sent\nI0909 00:44:23.626871    
1799 log.go:172] (0xc000a36630) (0xc000626b40) Stream removed, broadcasting: 1\nI0909 00:44:23.626887    1799 log.go:172] (0xc000a36630) Go away received\nI0909 00:44:23.627237    1799 log.go:172] (0xc000a36630) (0xc000626b40) Stream removed, broadcasting: 1\nI0909 00:44:23.627258    1799 log.go:172] (0xc000a36630) (0xc000a64000) Stream removed, broadcasting: 3\nI0909 00:44:23.627267    1799 log.go:172] (0xc000a36630) (0xc000a640a0) Stream removed, broadcasting: 5\n"
Sep  9 00:44:23.632: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep  9 00:44:23.632: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep  9 00:44:23.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1986 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  9 00:44:23.841: INFO: stderr: "I0909 00:44:23.758558    1820 log.go:172] (0xc00093e580) (0xc00072c6e0) Create stream\nI0909 00:44:23.758618    1820 log.go:172] (0xc00093e580) (0xc00072c6e0) Stream added, broadcasting: 1\nI0909 00:44:23.762227    1820 log.go:172] (0xc00093e580) Reply frame received for 1\nI0909 00:44:23.762267    1820 log.go:172] (0xc00093e580) (0xc00072c000) Create stream\nI0909 00:44:23.762276    1820 log.go:172] (0xc00093e580) (0xc00072c000) Stream added, broadcasting: 3\nI0909 00:44:23.763121    1820 log.go:172] (0xc00093e580) Reply frame received for 3\nI0909 00:44:23.763170    1820 log.go:172] (0xc00093e580) (0xc000212140) Create stream\nI0909 00:44:23.763193    1820 log.go:172] (0xc00093e580) (0xc000212140) Stream added, broadcasting: 5\nI0909 00:44:23.763868    1820 log.go:172] (0xc00093e580) Reply frame received for 5\nI0909 00:44:23.835420    1820 log.go:172] (0xc00093e580) Data frame received for 3\nI0909 00:44:23.835450    1820 log.go:172] (0xc00072c000) (3) Data frame handling\nI0909 00:44:23.835463    1820 log.go:172] (0xc00072c000) (3) Data frame sent\nI0909 00:44:23.835469    1820 log.go:172] (0xc00093e580) Data frame received for 3\nI0909 00:44:23.835474    1820 log.go:172] (0xc00072c000) (3) Data frame handling\nI0909 00:44:23.835502    1820 log.go:172] (0xc00093e580) Data frame received for 5\nI0909 00:44:23.835510    1820 log.go:172] (0xc000212140) (5) Data frame handling\nI0909 00:44:23.835517    1820 log.go:172] (0xc000212140) (5) Data frame sent\nI0909 00:44:23.835524    1820 log.go:172] (0xc00093e580) Data frame received for 5\nI0909 00:44:23.835531    1820 log.go:172] (0xc000212140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0909 00:44:23.837011    1820 log.go:172] (0xc00093e580) Data frame received for 1\nI0909 00:44:23.837041    1820 log.go:172] (0xc00072c6e0) (1) Data frame handling\nI0909 00:44:23.837051    1820 log.go:172] (0xc00072c6e0) (1) Data frame sent\nI0909 00:44:23.837066    
1820 log.go:172] (0xc00093e580) (0xc00072c6e0) Stream removed, broadcasting: 1\nI0909 00:44:23.837086    1820 log.go:172] (0xc00093e580) Go away received\nI0909 00:44:23.837395    1820 log.go:172] (0xc00093e580) (0xc00072c6e0) Stream removed, broadcasting: 1\nI0909 00:44:23.837411    1820 log.go:172] (0xc00093e580) (0xc00072c000) Stream removed, broadcasting: 3\nI0909 00:44:23.837418    1820 log.go:172] (0xc00093e580) (0xc000212140) Stream removed, broadcasting: 5\n"
Sep  9 00:44:23.841: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep  9 00:44:23.841: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep  9 00:44:23.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1986 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep  9 00:44:24.048: INFO: stderr: "I0909 00:44:23.977840    1842 log.go:172] (0xc000a2a630) (0xc00045aa00) Create stream\nI0909 00:44:23.977888    1842 log.go:172] (0xc000a2a630) (0xc00045aa00) Stream added, broadcasting: 1\nI0909 00:44:23.981267    1842 log.go:172] (0xc000a2a630) Reply frame received for 1\nI0909 00:44:23.981307    1842 log.go:172] (0xc000a2a630) (0xc0006a4000) Create stream\nI0909 00:44:23.981320    1842 log.go:172] (0xc000a2a630) (0xc0006a4000) Stream added, broadcasting: 3\nI0909 00:44:23.982248    1842 log.go:172] (0xc000a2a630) Reply frame received for 3\nI0909 00:44:23.982286    1842 log.go:172] (0xc000a2a630) (0xc00045a280) Create stream\nI0909 00:44:23.982300    1842 log.go:172] (0xc000a2a630) (0xc00045a280) Stream added, broadcasting: 5\nI0909 00:44:23.983154    1842 log.go:172] (0xc000a2a630) Reply frame received for 5\nI0909 00:44:24.042755    1842 log.go:172] (0xc000a2a630) Data frame received for 3\nI0909 00:44:24.042780    1842 log.go:172] (0xc0006a4000) (3) Data frame handling\nI0909 00:44:24.042798    1842 log.go:172] (0xc0006a4000) (3) Data frame sent\nI0909 00:44:24.042876    1842 log.go:172] (0xc000a2a630) Data frame received for 3\nI0909 00:44:24.042885    1842 log.go:172] (0xc0006a4000) (3) Data frame handling\nI0909 00:44:24.042927    1842 log.go:172] (0xc000a2a630) Data frame received for 5\nI0909 00:44:24.042954    1842 log.go:172] (0xc00045a280) (5) Data frame handling\nI0909 00:44:24.042974    1842 log.go:172] (0xc00045a280) (5) Data frame sent\nI0909 00:44:24.042987    1842 log.go:172] (0xc000a2a630) Data frame received for 5\nI0909 00:44:24.042997    1842 log.go:172] (0xc00045a280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0909 00:44:24.043937    1842 log.go:172] (0xc000a2a630) Data frame received for 1\nI0909 00:44:24.043952    1842 log.go:172] (0xc00045aa00) (1) Data frame handling\nI0909 00:44:24.043963    1842 log.go:172] (0xc00045aa00) (1) Data frame sent\nI0909 00:44:24.044240    
1842 log.go:172] (0xc000a2a630) (0xc00045aa00) Stream removed, broadcasting: 1\nI0909 00:44:24.044285    1842 log.go:172] (0xc000a2a630) Go away received\nI0909 00:44:24.044532    1842 log.go:172] (0xc000a2a630) (0xc00045aa00) Stream removed, broadcasting: 1\nI0909 00:44:24.044552    1842 log.go:172] (0xc000a2a630) (0xc0006a4000) Stream removed, broadcasting: 3\nI0909 00:44:24.044561    1842 log.go:172] (0xc000a2a630) (0xc00045a280) Stream removed, broadcasting: 5\n"
Sep  9 00:44:24.049: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep  9 00:44:24.049: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep  9 00:44:24.049: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
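The scaling behavior above is gated by readiness: the test breaks each pod's HTTP readiness probe by moving `index.html` out of the nginx web root, then restores it before scaling down. A minimal local sketch of that move-and-restore (stand-in paths under a temp dir; no cluster or `kubectl` needed):

```shell
# Local sketch of the probe-file shuffle the test performs via `kubectl exec`.
# "$tmp/html" and "$tmp" stand in for /usr/share/nginx/html and /tmp in the pod.
tmp=$(mktemp -d)
mkdir -p "$tmp/html"
echo "index" > "$tmp/html/index.html"

# "Break" readiness: the probe's target file disappears from the web root.
mv -v "$tmp/html/index.html" "$tmp/" || true
test ! -e "$tmp/html/index.html" && echo "probe file gone"

# "Restore" readiness: move it back, as the test does before scaling down.
mv -v "$tmp/index.html" "$tmp/html/" || true
test -e "$tmp/html/index.html" && echo "probe file restored"
```

The trailing `|| true` mirrors the logged invocation: the `mv` is allowed to fail (for instance if the file was already moved) without failing the whole `kubectl exec` call.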
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep  9 00:44:54.065: INFO: Deleting all statefulset in ns statefulset-1986
Sep  9 00:44:54.068: INFO: Scaling statefulset ss to 0
Sep  9 00:44:54.077: INFO: Waiting for statefulset status.replicas to be updated to 0
Sep  9 00:44:54.079: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:44:54.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1986" for this suite.
Sep  9 00:45:00.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:45:00.210: INFO: namespace statefulset-1986 deletion completed in 6.092247451s

• [SLOW TEST:98.329 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:45:00.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  9 00:45:00.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1064'
Sep  9 00:45:00.360: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep  9 00:45:00.360: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
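The stderr above flags `--generator=job/v1` as deprecated. A sketch of the non-deprecated equivalent using `kubectl create job` (job and namespace names are taken from this run; a reachable cluster is required, so the call is guarded and failures are tolerated for illustration):

```shell
# Non-deprecated equivalent of the deprecated
# `kubectl run --restart=OnFailure --generator=job/v1` invocation above.
# Guarded so it degrades gracefully when kubectl or a cluster is unavailable.
out=$(
  if command -v kubectl >/dev/null 2>&1; then
    kubectl create job e2e-test-nginx-job \
      --image=docker.io/library/nginx:1.14-alpine \
      --namespace=kubectl-1064 2>&1 || true
  else
    echo "kubectl not available; skipping"
  fi
)
echo "$out"
```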
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Sep  9 00:45:00.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-1064'
Sep  9 00:45:00.497: INFO: stderr: ""
Sep  9 00:45:00.497: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:45:00.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1064" for this suite.
Sep  9 00:45:22.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:45:22.576: INFO: namespace kubectl-1064 deletion completed in 22.075858049s

• [SLOW TEST:22.365 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:45:22.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0909 00:45:32.670401       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep  9 00:45:32.670: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:45:32.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8592" for this suite.
Sep  9 00:45:38.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:45:38.760: INFO: namespace gc-8592 deletion completed in 6.087010635s

• [SLOW TEST:16.184 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:45:38.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-22b1e7b6-6623-44bd-b2fc-c713d350d014
STEP: Creating a pod to test consume secrets
Sep  9 00:45:38.849: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-08f9e10d-eb6a-4b95-9647-7d9a81983a23" in namespace "projected-1140" to be "success or failure"
Sep  9 00:45:38.853: INFO: Pod "pod-projected-secrets-08f9e10d-eb6a-4b95-9647-7d9a81983a23": Phase="Pending", Reason="", readiness=false. Elapsed: 3.244987ms
Sep  9 00:45:40.856: INFO: Pod "pod-projected-secrets-08f9e10d-eb6a-4b95-9647-7d9a81983a23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006977823s
Sep  9 00:45:42.861: INFO: Pod "pod-projected-secrets-08f9e10d-eb6a-4b95-9647-7d9a81983a23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011444117s
STEP: Saw pod success
Sep  9 00:45:42.861: INFO: Pod "pod-projected-secrets-08f9e10d-eb6a-4b95-9647-7d9a81983a23" satisfied condition "success or failure"
Sep  9 00:45:42.864: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-08f9e10d-eb6a-4b95-9647-7d9a81983a23 container projected-secret-volume-test: 
STEP: delete the pod
Sep  9 00:45:42.898: INFO: Waiting for pod pod-projected-secrets-08f9e10d-eb6a-4b95-9647-7d9a81983a23 to disappear
Sep  9 00:45:42.935: INFO: Pod pod-projected-secrets-08f9e10d-eb6a-4b95-9647-7d9a81983a23 no longer exists
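The pod in this test consumes the secret through a projected volume with a key-to-path mapping. A sketch of the kind of manifest involved, written out via a heredoc — the container name and secret name come from this run, but the key/path names and image are illustrative, not taken from the log:

```shell
# Sketch of a projected-secret volume with an item mapping, as exercised above.
# Key/path names ("data-1" -> "new-path-data-1") and the image are assumptions.
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox
    command: ["cat", "/etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-22b1e7b6-6623-44bd-b2fc-c713d350d014
          items:
          - key: data-1
            path: new-path-data-1
EOF
grep -q "projected:" "$manifest" && echo "manifest written to $manifest"
```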
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:45:42.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1140" for this suite.
Sep  9 00:45:48.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:45:49.064: INFO: namespace projected-1140 deletion completed in 6.125297215s

• [SLOW TEST:10.304 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:45:49.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0909 00:45:50.208787       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep  9 00:45:50.208: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:45:50.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-601" for this suite.
Sep  9 00:45:56.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:45:56.324: INFO: namespace gc-601 deletion completed in 6.111803774s

• [SLOW TEST:7.259 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:45:56.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-5e6c5a0a-92ab-435b-8911-e23f52aae4c0
STEP: Creating a pod to test consume configMaps
Sep  9 00:45:56.395: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e15516c8-7079-4659-9abd-fcb026d9afac" in namespace "projected-7340" to be "success or failure"
Sep  9 00:45:56.399: INFO: Pod "pod-projected-configmaps-e15516c8-7079-4659-9abd-fcb026d9afac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.938146ms
Sep  9 00:45:58.402: INFO: Pod "pod-projected-configmaps-e15516c8-7079-4659-9abd-fcb026d9afac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007659421s
Sep  9 00:46:00.407: INFO: Pod "pod-projected-configmaps-e15516c8-7079-4659-9abd-fcb026d9afac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012405578s
STEP: Saw pod success
Sep  9 00:46:00.407: INFO: Pod "pod-projected-configmaps-e15516c8-7079-4659-9abd-fcb026d9afac" satisfied condition "success or failure"
Sep  9 00:46:00.410: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-e15516c8-7079-4659-9abd-fcb026d9afac container projected-configmap-volume-test: 
STEP: delete the pod
Sep  9 00:46:00.584: INFO: Waiting for pod pod-projected-configmaps-e15516c8-7079-4659-9abd-fcb026d9afac to disappear
Sep  9 00:46:00.614: INFO: Pod pod-projected-configmaps-e15516c8-7079-4659-9abd-fcb026d9afac no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:46:00.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7340" for this suite.
Sep  9 00:46:06.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:46:06.752: INFO: namespace projected-7340 deletion completed in 6.133416338s

• [SLOW TEST:10.428 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:46:06.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-7288
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7288
STEP: Deleting pre-stop pod
Sep  9 00:46:19.909: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
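The `"prestop": 1` count above shows the server pod's preStop hook fired exactly once before deletion. A cluster-free analogue of that ordering, with a shell EXIT trap standing in for `lifecycle.preStop` (the messages are hypothetical):

```shell
# Emulate preStop ordering: cleanup runs before the "process" is gone,
# the way the kubelet invokes lifecycle.preStop before stopping the container.
out=$(sh -c '
  prestop() { echo "prestop fired"; }
  trap prestop EXIT
  echo "serving"
')
echo "$out"
```

The EXIT trap is guaranteed to run on normal shell exit, so "prestop fired" always follows "serving" — the same ordering the test asserts by counting hook invocations.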
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:46:19.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7288" for this suite.
Sep  9 00:46:54.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:46:54.078: INFO: namespace prestop-7288 deletion completed in 34.121422084s

• [SLOW TEST:47.326 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
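The PreStop test above verifies that a pod's `preStop` lifecycle hook fires before the container is killed; the `"Received": {"prestop": 1}` state shows the server pod recorded exactly one hook invocation from the deleted tester pod. For reference, a minimal manifest wiring up such a hook might look like the following (a hypothetical sketch, not the manifest the suite actually used):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo            # placeholder name, not from the test run
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before SIGTERM is delivered; the kubelet
          # waits for it, bounded by terminationGracePeriodSeconds.
          command: ["/bin/sh", "-c", "echo prestop >> /tmp/hook.log"]
  terminationGracePeriodSeconds: 30
```

The suite's real server/tester pods report the hook over HTTP rather than a file, which is why the state dump above counts received `prestop` calls.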
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:46:54.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Sep  9 00:46:54.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-613'
Sep  9 00:46:54.430: INFO: stderr: ""
Sep  9 00:46:54.430: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep  9 00:46:55.435: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:46:55.435: INFO: Found 0 / 1
Sep  9 00:46:56.435: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:46:56.435: INFO: Found 0 / 1
Sep  9 00:46:57.435: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:46:57.435: INFO: Found 0 / 1
Sep  9 00:46:58.435: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:46:58.435: INFO: Found 1 / 1
Sep  9 00:46:58.435: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Sep  9 00:46:58.438: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:46:58.439: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep  9 00:46:58.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-4k8cl --namespace=kubectl-613 -p {"metadata":{"annotations":{"x":"y"}}}'
Sep  9 00:46:58.532: INFO: stderr: ""
Sep  9 00:46:58.532: INFO: stdout: "pod/redis-master-4k8cl patched\n"
STEP: checking annotations
Sep  9 00:46:58.537: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:46:58.537: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:46:58.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-613" for this suite.
Sep  9 00:47:20.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:47:20.626: INFO: namespace kubectl-613 deletion completed in 22.084885393s

• [SLOW TEST:26.547 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
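The patch step above applies a strategic-merge patch that adds a single annotation to the RC's pod. Reproduced as standalone commands (pod name and namespace are the ones from this run, so they are only placeholders outside it):

```shell
# Strategic-merge patch adding annotation x=y, as the test does:
kubectl patch pod redis-master-4k8cl -n kubectl-613 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'

# One way to confirm the annotation landed:
kubectl get pod redis-master-4k8cl -n kubectl-613 \
  -o jsonpath='{.metadata.annotations.x}'
```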
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:47:20.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Sep  9 00:47:20.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2323'
Sep  9 00:47:23.485: INFO: stderr: ""
Sep  9 00:47:23.485: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Sep  9 00:47:24.489: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:47:24.489: INFO: Found 0 / 1
Sep  9 00:47:25.509: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:47:25.509: INFO: Found 0 / 1
Sep  9 00:47:26.490: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:47:26.490: INFO: Found 0 / 1
Sep  9 00:47:27.490: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:47:27.490: INFO: Found 1 / 1
Sep  9 00:47:27.490: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Sep  9 00:47:27.497: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:47:27.497: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Sep  9 00:47:27.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t82gd redis-master --namespace=kubectl-2323'
Sep  9 00:47:27.606: INFO: stderr: ""
Sep  9 00:47:27.607: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Sep 00:47:26.544 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Sep 00:47:26.544 # Server started, Redis version 3.2.12\n1:M 09 Sep 00:47:26.544 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Sep 00:47:26.544 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Sep  9 00:47:27.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t82gd redis-master --namespace=kubectl-2323 --tail=1'
Sep  9 00:47:27.703: INFO: stderr: ""
Sep  9 00:47:27.703: INFO: stdout: "1:M 09 Sep 00:47:26.544 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Sep  9 00:47:27.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t82gd redis-master --namespace=kubectl-2323 --limit-bytes=1'
Sep  9 00:47:27.798: INFO: stderr: ""
Sep  9 00:47:27.798: INFO: stdout: " "
STEP: exposing timestamps
Sep  9 00:47:27.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t82gd redis-master --namespace=kubectl-2323 --tail=1 --timestamps'
Sep  9 00:47:27.914: INFO: stderr: ""
Sep  9 00:47:27.914: INFO: stdout: "2020-09-09T00:47:26.544718871Z 1:M 09 Sep 00:47:26.544 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Sep  9 00:47:30.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t82gd redis-master --namespace=kubectl-2323 --since=1s'
Sep  9 00:47:30.526: INFO: stderr: ""
Sep  9 00:47:30.526: INFO: stdout: ""
Sep  9 00:47:30.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t82gd redis-master --namespace=kubectl-2323 --since=24h'
Sep  9 00:47:30.633: INFO: stderr: ""
Sep  9 00:47:30.633: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Sep 00:47:26.544 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Sep 00:47:26.544 # Server started, Redis version 3.2.12\n1:M 09 Sep 00:47:26.544 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Sep 00:47:26.544 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Sep  9 00:47:30.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2323'
Sep  9 00:47:30.733: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  9 00:47:30.733: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Sep  9 00:47:30.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2323'
Sep  9 00:47:30.833: INFO: stderr: "No resources found.\n"
Sep  9 00:47:30.833: INFO: stdout: ""
Sep  9 00:47:30.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2323 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep  9 00:47:30.919: INFO: stderr: ""
Sep  9 00:47:30.919: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:47:30.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2323" for this suite.
Sep  9 00:47:53.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:47:53.248: INFO: namespace kubectl-2323 deletion completed in 22.325469926s

• [SLOW TEST:32.621 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
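The log-filtering flags exercised above, collected in one place (same pod and namespace as this run; `--kubeconfig` omitted for brevity):

```shell
# Full container log:
kubectl logs redis-master-t82gd redis-master -n kubectl-2323

# Limit by lines, bytes, and add timestamps:
kubectl logs redis-master-t82gd redis-master -n kubectl-2323 --tail=1
kubectl logs redis-master-t82gd redis-master -n kubectl-2323 --limit-bytes=1
kubectl logs redis-master-t82gd redis-master -n kubectl-2323 --tail=1 --timestamps

# Restrict to a time window; --since=1s is empty above because the pod
# logged nothing in the last second, while --since=24h returns everything:
kubectl logs redis-master-t82gd redis-master -n kubectl-2323 --since=1s
kubectl logs redis-master-t82gd redis-master -n kubectl-2323 --since=24h
```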
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:47:53.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  9 00:47:53.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6459'
Sep  9 00:47:53.460: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep  9 00:47:53.460: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Sep  9 00:47:53.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6459'
Sep  9 00:47:53.613: INFO: stderr: ""
Sep  9 00:47:53.614: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:47:53.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6459" for this suite.
Sep  9 00:47:59.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:47:59.749: INFO: namespace kubectl-6459 deletion completed in 6.132484759s

• [SLOW TEST:6.501 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
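The stderr warning above notes that `kubectl run --generator=deployment/apps.v1` is deprecated. On this 1.15-era kubectl, the deprecated form and the replacements the warning points to look like this (names and image taken from the run; a sketch, not output of the suite):

```shell
# Deprecated form the test still uses; on kubectl 1.15 this creates a Deployment:
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine

# Non-deprecated equivalents suggested by the warning:
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine            # creates a bare Pod
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine            # creates a Deployment
```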
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:47:59.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:47:59.880: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"35b42685-0bad-429c-a720-e740f9312a97", Controller:(*bool)(0xc0010af562), BlockOwnerDeletion:(*bool)(0xc0010af563)}}
Sep  9 00:47:59.919: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b1ca6240-64b2-4bbf-b970-f1d70b9b1dbb", Controller:(*bool)(0xc002b71e72), BlockOwnerDeletion:(*bool)(0xc002b71e73)}}
Sep  9 00:47:59.931: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"dffba554-9e91-48c5-a5a1-dd27c4ef6d82", Controller:(*bool)(0xc0010af70a), BlockOwnerDeletion:(*bool)(0xc0010af70b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:48:04.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3599" for this suite.
Sep  9 00:48:10.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:48:11.072: INFO: namespace gc-3599 deletion completed in 6.128991531s

• [SLOW TEST:11.322 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
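The garbage-collector test above builds an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) via `metadata.ownerReferences` and then verifies that deletion is not deadlocked by the circle. One edge of that cycle, written out as a manifest fragment (the `uid` is copied from the run's log and would have to match the real owner's `metadata.uid` in any other cluster):

```yaml
# pod1 declaring pod3 as its controlling owner:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 35b42685-0bad-429c-a720-e740f9312a97   # must equal pod3's metadata.uid
    controller: true
    blockOwnerDeletion: true
```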
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:48:11.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Sep  9 00:48:11.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7707'
Sep  9 00:48:11.419: INFO: stderr: ""
Sep  9 00:48:11.419: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  9 00:48:11.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7707'
Sep  9 00:48:11.563: INFO: stderr: ""
Sep  9 00:48:11.563: INFO: stdout: "update-demo-nautilus-c9nmr update-demo-nautilus-qx5r8 "
Sep  9 00:48:11.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9nmr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Sep  9 00:48:11.650: INFO: stderr: ""
Sep  9 00:48:11.650: INFO: stdout: ""
Sep  9 00:48:11.650: INFO: update-demo-nautilus-c9nmr is created but not running
Sep  9 00:48:16.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7707'
Sep  9 00:48:16.756: INFO: stderr: ""
Sep  9 00:48:16.756: INFO: stdout: "update-demo-nautilus-c9nmr update-demo-nautilus-qx5r8 "
Sep  9 00:48:16.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9nmr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Sep  9 00:48:16.856: INFO: stderr: ""
Sep  9 00:48:16.856: INFO: stdout: "true"
Sep  9 00:48:16.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9nmr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Sep  9 00:48:16.951: INFO: stderr: ""
Sep  9 00:48:16.951: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 00:48:16.951: INFO: validating pod update-demo-nautilus-c9nmr
Sep  9 00:48:16.955: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 00:48:16.955: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 00:48:16.955: INFO: update-demo-nautilus-c9nmr is verified up and running
Sep  9 00:48:16.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qx5r8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Sep  9 00:48:17.044: INFO: stderr: ""
Sep  9 00:48:17.044: INFO: stdout: "true"
Sep  9 00:48:17.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qx5r8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Sep  9 00:48:17.136: INFO: stderr: ""
Sep  9 00:48:17.136: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 00:48:17.136: INFO: validating pod update-demo-nautilus-qx5r8
Sep  9 00:48:17.140: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 00:48:17.140: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 00:48:17.140: INFO: update-demo-nautilus-qx5r8 is verified up and running
STEP: scaling down the replication controller
Sep  9 00:48:17.143: INFO: scanned /root for discovery docs: 
Sep  9 00:48:17.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7707'
Sep  9 00:48:18.259: INFO: stderr: ""
Sep  9 00:48:18.259: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  9 00:48:18.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7707'
Sep  9 00:48:18.357: INFO: stderr: ""
Sep  9 00:48:18.357: INFO: stdout: "update-demo-nautilus-c9nmr update-demo-nautilus-qx5r8 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Sep  9 00:48:23.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7707'
Sep  9 00:48:23.458: INFO: stderr: ""
Sep  9 00:48:23.458: INFO: stdout: "update-demo-nautilus-c9nmr "
Sep  9 00:48:23.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9nmr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Sep  9 00:48:23.545: INFO: stderr: ""
Sep  9 00:48:23.545: INFO: stdout: "true"
Sep  9 00:48:23.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9nmr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Sep  9 00:48:23.628: INFO: stderr: ""
Sep  9 00:48:23.628: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 00:48:23.628: INFO: validating pod update-demo-nautilus-c9nmr
Sep  9 00:48:23.631: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 00:48:23.631: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 00:48:23.631: INFO: update-demo-nautilus-c9nmr is verified up and running
STEP: scaling up the replication controller
Sep  9 00:48:23.634: INFO: scanned /root for discovery docs: 
Sep  9 00:48:23.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7707'
Sep  9 00:48:24.746: INFO: stderr: ""
Sep  9 00:48:24.746: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  9 00:48:24.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7707'
Sep  9 00:48:24.833: INFO: stderr: ""
Sep  9 00:48:24.833: INFO: stdout: "update-demo-nautilus-b7d9v update-demo-nautilus-c9nmr "
Sep  9 00:48:24.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7d9v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Sep  9 00:48:24.921: INFO: stderr: ""
Sep  9 00:48:24.921: INFO: stdout: ""
Sep  9 00:48:24.921: INFO: update-demo-nautilus-b7d9v is created but not running
Sep  9 00:48:29.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7707'
Sep  9 00:48:30.026: INFO: stderr: ""
Sep  9 00:48:30.026: INFO: stdout: "update-demo-nautilus-b7d9v update-demo-nautilus-c9nmr "
Sep  9 00:48:30.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7d9v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Sep  9 00:48:30.118: INFO: stderr: ""
Sep  9 00:48:30.118: INFO: stdout: "true"
Sep  9 00:48:30.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7d9v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Sep  9 00:48:30.208: INFO: stderr: ""
Sep  9 00:48:30.208: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 00:48:30.208: INFO: validating pod update-demo-nautilus-b7d9v
Sep  9 00:48:30.212: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 00:48:30.212: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 00:48:30.212: INFO: update-demo-nautilus-b7d9v is verified up and running
Sep  9 00:48:30.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9nmr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Sep  9 00:48:30.301: INFO: stderr: ""
Sep  9 00:48:30.301: INFO: stdout: "true"
Sep  9 00:48:30.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9nmr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Sep  9 00:48:30.392: INFO: stderr: ""
Sep  9 00:48:30.392: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 00:48:30.392: INFO: validating pod update-demo-nautilus-c9nmr
Sep  9 00:48:30.395: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 00:48:30.395: INFO: Unmarshalled JSON image data => {nautilus.jpg}, expecting nautilus.jpg.
Sep  9 00:48:30.395: INFO: update-demo-nautilus-c9nmr is verified up and running
STEP: using delete to clean up resources
Sep  9 00:48:30.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7707'
Sep  9 00:48:30.506: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  9 00:48:30.506: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Sep  9 00:48:30.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7707'
Sep  9 00:48:30.601: INFO: stderr: "No resources found.\n"
Sep  9 00:48:30.601: INFO: stdout: ""
Sep  9 00:48:30.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7707 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep  9 00:48:30.689: INFO: stderr: ""
Sep  9 00:48:30.689: INFO: stdout: "update-demo-nautilus-b7d9v\nupdate-demo-nautilus-c9nmr\n"
Sep  9 00:48:31.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7707'
Sep  9 00:48:31.302: INFO: stderr: "No resources found.\n"
Sep  9 00:48:31.302: INFO: stdout: ""
Sep  9 00:48:31.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7707 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep  9 00:48:31.406: INFO: stderr: ""
Sep  9 00:48:31.406: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:48:31.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7707" for this suite.
Sep  9 00:48:53.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:48:53.689: INFO: namespace kubectl-7707 deletion completed in 22.279050317s

• [SLOW TEST:42.617 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
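The `kubectl get pods -o template` invocations above use Go templates to pull a single field (the running state or image of the `update-demo` container) out of each pod object. As a minimal sketch of that same extraction logic (not part of the log; the function name and sample pod dict are illustrative):

```python
# Sketch: extract the image of a named container from a pod object,
# mirroring what the test's go-template does via kubectl.
def container_image(pod, name):
    for c in pod.get("spec", {}).get("containers", []):
        if c["name"] == name:
            return c["image"]
    return None  # container not found

pod = {"spec": {"containers": [
    {"name": "update-demo",
     "image": "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"}]}}
```

The template variant that prints `true` works the same way, except it matches on `status.containerStatuses` and checks for a `running` state.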
SS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:48:53.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:48:53.729: INFO: Creating deployment "test-recreate-deployment"
Sep  9 00:48:53.739: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Sep  9 00:48:53.793: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Sep  9 00:48:55.835: INFO: Waiting for deployment "test-recreate-deployment" to complete
Sep  9 00:48:55.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735209333, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735209333, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735209333, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735209333, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 00:48:57.855: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Sep  9 00:48:57.861: INFO: Updating deployment test-recreate-deployment
Sep  9 00:48:57.861: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run alongside old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep  9 00:48:58.586: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-5136,SelfLink:/apis/apps/v1/namespaces/deployment-5136/deployments/test-recreate-deployment,UID:f91e9ef8-f026-4719-8fc6-68d4b87672f7,ResourceVersion:325809,Generation:2,CreationTimestamp:2020-09-09 00:48:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-09-09 00:48:58 +0000 UTC 2020-09-09 00:48:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-09-09 00:48:58 +0000 UTC 2020-09-09 00:48:53 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Sep  9 00:48:58.590: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-5136,SelfLink:/apis/apps/v1/namespaces/deployment-5136/replicasets/test-recreate-deployment-5c8c9cc69d,UID:035ac750-075d-479c-93e2-fd3a30e93f45,ResourceVersion:325806,Generation:1,CreationTimestamp:2020-09-09 00:48:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment f91e9ef8-f026-4719-8fc6-68d4b87672f7 0xc003a07ba7 0xc003a07ba8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep  9 00:48:58.590: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Sep  9 00:48:58.590: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-5136,SelfLink:/apis/apps/v1/namespaces/deployment-5136/replicasets/test-recreate-deployment-6df85df6b9,UID:e7786b42-7944-4044-8798-26ed61a65618,ResourceVersion:325797,Generation:2,CreationTimestamp:2020-09-09 00:48:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment f91e9ef8-f026-4719-8fc6-68d4b87672f7 0xc003a07c87 0xc003a07c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep  9 00:48:58.600: INFO: Pod "test-recreate-deployment-5c8c9cc69d-2d9rv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-2d9rv,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-5136,SelfLink:/api/v1/namespaces/deployment-5136/pods/test-recreate-deployment-5c8c9cc69d-2d9rv,UID:f349bd0c-844a-4539-8216-4b19fc43f6fb,ResourceVersion:325810,Generation:0,CreationTimestamp:2020-09-09 00:48:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 035ac750-075d-479c-93e2-fd3a30e93f45 0xc003a46587 0xc003a46588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xscd4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xscd4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xscd4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003a46600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003a46620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:48:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:48:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:48:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 00:48:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-09-09 00:48:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:48:58.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5136" for this suite.
Sep  9 00:49:04.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:49:04.762: INFO: namespace deployment-5136 deletion completed in 6.158559376s

• [SLOW TEST:11.073 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
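The test above verifies the defining property of the `Recreate` deployment strategy: every old pod is terminated before any new pod is created, so the two revisions never run concurrently. A minimal sketch of that ordering invariant (illustrative only; the function names are not from the test source):

```python
# Sketch of the ordering a Recreate rollout must produce:
# all deletes of old-revision pods precede all creates of new-revision pods.
def recreate_rollout(old_pods, new_pods):
    return [("delete", p) for p in old_pods] + [("create", p) for p in new_pods]

def no_overlap(events):
    # True if no "delete" event occurs after a "create" event,
    # i.e. old and new revisions never coexist.
    seen_create = False
    for action, _ in events:
        if action == "create":
            seen_create = True
        elif seen_create:
            return False
    return True
```

This is why the deployment status dump above shows `UnavailableReplicas:1` mid-rollout: with `Recreate` there is a window where zero pods are available.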
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:49:04.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:49:04.945: INFO: Create a RollingUpdate DaemonSet
Sep  9 00:49:04.952: INFO: Check that daemon pods launch on every node of the cluster
Sep  9 00:49:04.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:04.977: INFO: Number of nodes with available pods: 0
Sep  9 00:49:04.977: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:49:06.059: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:06.062: INFO: Number of nodes with available pods: 0
Sep  9 00:49:06.062: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:49:07.060: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:07.063: INFO: Number of nodes with available pods: 0
Sep  9 00:49:07.063: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:49:08.029: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:08.033: INFO: Number of nodes with available pods: 0
Sep  9 00:49:08.033: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:49:08.981: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:08.985: INFO: Number of nodes with available pods: 2
Sep  9 00:49:08.985: INFO: Number of running nodes: 2, number of available pods: 2
Sep  9 00:49:08.985: INFO: Update the DaemonSet to trigger a rollout
Sep  9 00:49:08.991: INFO: Updating DaemonSet daemon-set
Sep  9 00:49:24.025: INFO: Roll back the DaemonSet before rollout is complete
Sep  9 00:49:24.031: INFO: Updating DaemonSet daemon-set
Sep  9 00:49:24.032: INFO: Make sure DaemonSet rollback is complete
Sep  9 00:49:24.045: INFO: Wrong image for pod: daemon-set-mgrqt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Sep  9 00:49:24.045: INFO: Pod daemon-set-mgrqt is not available
Sep  9 00:49:24.068: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:25.071: INFO: Wrong image for pod: daemon-set-mgrqt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Sep  9 00:49:25.071: INFO: Pod daemon-set-mgrqt is not available
Sep  9 00:49:25.074: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:26.088: INFO: Wrong image for pod: daemon-set-mgrqt. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Sep  9 00:49:26.088: INFO: Pod daemon-set-mgrqt is not available
Sep  9 00:49:26.091: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:27.073: INFO: Pod daemon-set-254d4 is not available
Sep  9 00:49:27.076: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2887, will wait for the garbage collector to delete the pods
Sep  9 00:49:27.142: INFO: Deleting DaemonSet.extensions daemon-set took: 6.289612ms
Sep  9 00:49:27.243: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.251876ms
Sep  9 00:49:33.746: INFO: Number of nodes with available pods: 0
Sep  9 00:49:33.746: INFO: Number of running nodes: 0, number of available pods: 0
Sep  9 00:49:33.749: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2887/daemonsets","resourceVersion":"325978"},"items":null}

Sep  9 00:49:33.752: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2887/pods","resourceVersion":"325978"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:49:33.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2887" for this suite.
Sep  9 00:49:39.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:49:39.862: INFO: namespace daemonsets-2887 deletion completed in 6.093957121s

• [SLOW TEST:35.100 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
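The rollback check above compares each pod's image against the rolled-back template and only flags mismatches (here, `daemon-set-mgrqt` still running the bad `foo:non-existent` image), so pods already on the correct image are left untouched. A small sketch of that comparison, assuming a simple pod-name-to-image mapping (names taken from the log, logic illustrative):

```python
# Sketch: a rollback should only replace pods whose image differs
# from the rolled-back DaemonSet template; matching pods keep running.
def pods_needing_restart(pods, template_image):
    return [name for name, image in sorted(pods.items())
            if image != template_image]

pods = {"daemon-set-mgrqt": "foo:non-existent",
        "daemon-set-lkfz2": "docker.io/library/nginx:1.14-alpine"}
```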
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:49:39.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep  9 00:49:39.961: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:39.966: INFO: Number of nodes with available pods: 0
Sep  9 00:49:39.966: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:49:41.060: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:41.068: INFO: Number of nodes with available pods: 0
Sep  9 00:49:41.068: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:49:42.109: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:42.222: INFO: Number of nodes with available pods: 0
Sep  9 00:49:42.222: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:49:43.061: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:43.064: INFO: Number of nodes with available pods: 0
Sep  9 00:49:43.064: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:49:43.971: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:43.974: INFO: Number of nodes with available pods: 1
Sep  9 00:49:43.974: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:49:44.971: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:44.974: INFO: Number of nodes with available pods: 2
Sep  9 00:49:44.974: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Sep  9 00:49:44.993: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:49:44.998: INFO: Number of nodes with available pods: 2
Sep  9 00:49:44.998: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9286, will wait for the garbage collector to delete the pods
Sep  9 00:49:46.108: INFO: Deleting DaemonSet.extensions daemon-set took: 5.501391ms
Sep  9 00:49:46.408: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.253927ms
Sep  9 00:49:53.711: INFO: Number of nodes with available pods: 0
Sep  9 00:49:53.711: INFO: Number of running nodes: 0, number of available pods: 0
Sep  9 00:49:53.714: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9286/daemonsets","resourceVersion":"326099"},"items":null}

Sep  9 00:49:53.717: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9286/pods","resourceVersion":"326099"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:49:53.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9286" for this suite.
Sep  9 00:49:59.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:49:59.823: INFO: namespace daemonsets-9286 deletion completed in 6.092153756s

• [SLOW TEST:19.959 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:49:59.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-3fa99ec4-ec6b-43e3-84b0-78a6455994ca
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:49:59.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6625" for this suite.
Sep  9 00:50:05.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:50:05.985: INFO: namespace configmap-6625 deletion completed in 6.083420033s

• [SLOW TEST:6.162 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:50:05.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-bd78a078-d220-4e05-abcb-abce556d8d1e
STEP: Creating a pod to test consume configMaps
Sep  9 00:50:06.053: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7b17c18e-a327-4ebe-9df0-2469f0de99c7" in namespace "projected-9004" to be "success or failure"
Sep  9 00:50:06.056: INFO: Pod "pod-projected-configmaps-7b17c18e-a327-4ebe-9df0-2469f0de99c7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.020835ms
Sep  9 00:50:08.077: INFO: Pod "pod-projected-configmaps-7b17c18e-a327-4ebe-9df0-2469f0de99c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024057737s
Sep  9 00:50:10.082: INFO: Pod "pod-projected-configmaps-7b17c18e-a327-4ebe-9df0-2469f0de99c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028340152s
STEP: Saw pod success
Sep  9 00:50:10.082: INFO: Pod "pod-projected-configmaps-7b17c18e-a327-4ebe-9df0-2469f0de99c7" satisfied condition "success or failure"
Sep  9 00:50:10.085: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-7b17c18e-a327-4ebe-9df0-2469f0de99c7 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  9 00:50:10.122: INFO: Waiting for pod pod-projected-configmaps-7b17c18e-a327-4ebe-9df0-2469f0de99c7 to disappear
Sep  9 00:50:10.134: INFO: Pod pod-projected-configmaps-7b17c18e-a327-4ebe-9df0-2469f0de99c7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:50:10.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9004" for this suite.
Sep  9 00:50:16.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:50:16.235: INFO: namespace projected-9004 deletion completed in 6.097248015s

• [SLOW TEST:10.250 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:50:16.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5550.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5550.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5550.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5550.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5550.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5550.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep  9 00:50:22.361: INFO: DNS probes using dns-5550/dns-test-36bc783f-ea4d-42be-ab03-c6c92eafee04 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:50:22.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5550" for this suite.
Sep  9 00:50:28.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:50:28.551: INFO: namespace dns-5550 deletion completed in 6.143110795s

• [SLOW TEST:12.315 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:50:28.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep  9 00:50:28.594: INFO: Waiting up to 5m0s for pod "downward-api-b526a139-f4e2-4830-9056-8b493148ec18" in namespace "downward-api-8328" to be "success or failure"
Sep  9 00:50:28.624: INFO: Pod "downward-api-b526a139-f4e2-4830-9056-8b493148ec18": Phase="Pending", Reason="", readiness=false. Elapsed: 30.39449ms
Sep  9 00:50:30.628: INFO: Pod "downward-api-b526a139-f4e2-4830-9056-8b493148ec18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034290862s
Sep  9 00:50:32.632: INFO: Pod "downward-api-b526a139-f4e2-4830-9056-8b493148ec18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038559435s
STEP: Saw pod success
Sep  9 00:50:32.632: INFO: Pod "downward-api-b526a139-f4e2-4830-9056-8b493148ec18" satisfied condition "success or failure"
Sep  9 00:50:32.635: INFO: Trying to get logs from node iruya-worker2 pod downward-api-b526a139-f4e2-4830-9056-8b493148ec18 container dapi-container: 
STEP: delete the pod
Sep  9 00:50:32.696: INFO: Waiting for pod downward-api-b526a139-f4e2-4830-9056-8b493148ec18 to disappear
Sep  9 00:50:32.712: INFO: Pod downward-api-b526a139-f4e2-4830-9056-8b493148ec18 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:50:32.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8328" for this suite.
Sep  9 00:50:38.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:50:38.827: INFO: namespace downward-api-8328 deletion completed in 6.112074488s

• [SLOW TEST:10.275 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:50:38.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4012
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4012
STEP: Creating statefulset with conflicting port in namespace statefulset-4012
STEP: Waiting until pod test-pod will start running in namespace statefulset-4012
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4012
Sep  9 00:50:42.965: INFO: Observed stateful pod in namespace: statefulset-4012, name: ss-0, uid: baf5b91a-47f4-4a41-bc01-f2569a65e718, status phase: Failed. Waiting for statefulset controller to delete.
Sep  9 00:50:43.018: INFO: Observed stateful pod in namespace: statefulset-4012, name: ss-0, uid: baf5b91a-47f4-4a41-bc01-f2569a65e718, status phase: Failed. Waiting for statefulset controller to delete.
Sep  9 00:50:43.021: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4012
STEP: Removing pod with conflicting port in namespace statefulset-4012
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4012 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep  9 00:50:49.127: INFO: Deleting all statefulset in ns statefulset-4012
Sep  9 00:50:49.131: INFO: Scaling statefulset ss to 0
Sep  9 00:51:09.196: INFO: Waiting for statefulset status.replicas updated to 0
Sep  9 00:51:09.199: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:51:09.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4012" for this suite.
Sep  9 00:51:15.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:51:15.298: INFO: namespace statefulset-4012 deletion completed in 6.083671171s

• [SLOW TEST:36.470 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:51:15.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep  9 00:51:15.366: INFO: Waiting up to 5m0s for pod "downwardapi-volume-edcc98b5-455b-4492-997a-361e7782901f" in namespace "downward-api-424" to be "success or failure"
Sep  9 00:51:15.370: INFO: Pod "downwardapi-volume-edcc98b5-455b-4492-997a-361e7782901f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.081203ms
Sep  9 00:51:17.374: INFO: Pod "downwardapi-volume-edcc98b5-455b-4492-997a-361e7782901f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007118638s
Sep  9 00:51:19.378: INFO: Pod "downwardapi-volume-edcc98b5-455b-4492-997a-361e7782901f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011199335s
STEP: Saw pod success
Sep  9 00:51:19.378: INFO: Pod "downwardapi-volume-edcc98b5-455b-4492-997a-361e7782901f" satisfied condition "success or failure"
Sep  9 00:51:19.381: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-edcc98b5-455b-4492-997a-361e7782901f container client-container: 
STEP: delete the pod
Sep  9 00:51:19.401: INFO: Waiting for pod downwardapi-volume-edcc98b5-455b-4492-997a-361e7782901f to disappear
Sep  9 00:51:19.414: INFO: Pod downwardapi-volume-edcc98b5-455b-4492-997a-361e7782901f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:51:19.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-424" for this suite.
Sep  9 00:51:25.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:51:25.523: INFO: namespace downward-api-424 deletion completed in 6.106144831s

• [SLOW TEST:10.225 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:51:25.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep  9 00:51:25.612: INFO: Waiting up to 5m0s for pod "pod-833c078f-d3ca-4c73-ac7d-39fdb585e753" in namespace "emptydir-9241" to be "success or failure"
Sep  9 00:51:25.615: INFO: Pod "pod-833c078f-d3ca-4c73-ac7d-39fdb585e753": Phase="Pending", Reason="", readiness=false. Elapsed: 2.590366ms
Sep  9 00:51:27.678: INFO: Pod "pod-833c078f-d3ca-4c73-ac7d-39fdb585e753": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06570979s
Sep  9 00:51:29.682: INFO: Pod "pod-833c078f-d3ca-4c73-ac7d-39fdb585e753": Phase="Running", Reason="", readiness=true. Elapsed: 4.069955247s
Sep  9 00:51:31.687: INFO: Pod "pod-833c078f-d3ca-4c73-ac7d-39fdb585e753": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074026715s
STEP: Saw pod success
Sep  9 00:51:31.687: INFO: Pod "pod-833c078f-d3ca-4c73-ac7d-39fdb585e753" satisfied condition "success or failure"
Sep  9 00:51:31.689: INFO: Trying to get logs from node iruya-worker pod pod-833c078f-d3ca-4c73-ac7d-39fdb585e753 container test-container: 
STEP: delete the pod
Sep  9 00:51:31.709: INFO: Waiting for pod pod-833c078f-d3ca-4c73-ac7d-39fdb585e753 to disappear
Sep  9 00:51:31.731: INFO: Pod pod-833c078f-d3ca-4c73-ac7d-39fdb585e753 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:51:31.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9241" for this suite.
Sep  9 00:51:37.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:51:37.821: INFO: namespace emptydir-9241 deletion completed in 6.086063623s

• [SLOW TEST:12.297 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:51:37.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Sep  9 00:51:37.874: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:51:53.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2055" for this suite.
Sep  9 00:51:59.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:51:59.752: INFO: namespace pods-2055 deletion completed in 6.100206638s

• [SLOW TEST:21.931 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:51:59.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Sep  9 00:51:59.830: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep  9 00:51:59.837: INFO: Waiting for terminating namespaces to be deleted...
Sep  9 00:51:59.840: INFO: 
Logging pods the kubelet thinks is on node iruya-worker before test
Sep  9 00:51:59.845: INFO: kindnet-l8ltc from kube-system started at 2020-09-07 19:17:06 +0000 UTC (1 container statuses recorded)
Sep  9 00:51:59.845: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep  9 00:51:59.845: INFO: kube-proxy-7tdlb from kube-system started at 2020-09-07 19:17:06 +0000 UTC (1 container statuses recorded)
Sep  9 00:51:59.845: INFO: 	Container kube-proxy ready: true, restart count 0
Sep  9 00:51:59.845: INFO: 
Logging pods the kubelet thinks is on node iruya-worker2 before test
Sep  9 00:51:59.851: INFO: kube-proxy-hwdzp from kube-system started at 2020-09-07 19:16:55 +0000 UTC (1 container statuses recorded)
Sep  9 00:51:59.851: INFO: 	Container kube-proxy ready: true, restart count 0
Sep  9 00:51:59.851: INFO: kindnet-mnblj from kube-system started at 2020-09-07 19:16:56 +0000 UTC (1 container statuses recorded)
Sep  9 00:51:59.851: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep  9 00:51:59.851: INFO: coredns-5d4dd4b4db-25mzm from kube-system started at 2020-09-07 19:17:27 +0000 UTC (1 container statuses recorded)
Sep  9 00:51:59.851: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.1632f74b29e1375d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:52:00.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7001" for this suite.
Sep  9 00:52:06.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:52:07.015: INFO: namespace sched-pred-7001 deletion completed in 6.141481825s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.262 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:52:07.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:52:07.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-270" for this suite.
Sep  9 00:52:13.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:52:13.275: INFO: namespace kubelet-test-270 deletion completed in 6.07683242s

• [SLOW TEST:6.259 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:52:13.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-ztjk
STEP: Creating a pod to test atomic-volume-subpath
Sep  9 00:52:13.387: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-ztjk" in namespace "subpath-7967" to be "success or failure"
Sep  9 00:52:13.398: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Pending", Reason="", readiness=false. Elapsed: 11.538296ms
Sep  9 00:52:15.463: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076546967s
Sep  9 00:52:17.468: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Running", Reason="", readiness=true. Elapsed: 4.080997672s
Sep  9 00:52:19.471: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Running", Reason="", readiness=true. Elapsed: 6.084542469s
Sep  9 00:52:21.475: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Running", Reason="", readiness=true. Elapsed: 8.088343392s
Sep  9 00:52:23.479: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Running", Reason="", readiness=true. Elapsed: 10.092317562s
Sep  9 00:52:25.483: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Running", Reason="", readiness=true. Elapsed: 12.096499731s
Sep  9 00:52:27.488: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Running", Reason="", readiness=true. Elapsed: 14.100852204s
Sep  9 00:52:29.492: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Running", Reason="", readiness=true. Elapsed: 16.10540377s
Sep  9 00:52:31.506: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Running", Reason="", readiness=true. Elapsed: 18.118924983s
Sep  9 00:52:33.510: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Running", Reason="", readiness=true. Elapsed: 20.12328113s
Sep  9 00:52:35.514: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Running", Reason="", readiness=true. Elapsed: 22.127377876s
Sep  9 00:52:37.519: INFO: Pod "pod-subpath-test-projected-ztjk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.13175548s
STEP: Saw pod success
Sep  9 00:52:37.519: INFO: Pod "pod-subpath-test-projected-ztjk" satisfied condition "success or failure"
Sep  9 00:52:37.522: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-ztjk container test-container-subpath-projected-ztjk: 
STEP: delete the pod
Sep  9 00:52:37.558: INFO: Waiting for pod pod-subpath-test-projected-ztjk to disappear
Sep  9 00:52:37.577: INFO: Pod pod-subpath-test-projected-ztjk no longer exists
STEP: Deleting pod pod-subpath-test-projected-ztjk
Sep  9 00:52:37.578: INFO: Deleting pod "pod-subpath-test-projected-ztjk" in namespace "subpath-7967"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:52:37.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7967" for this suite.
Sep  9 00:52:43.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:52:43.693: INFO: namespace subpath-7967 deletion completed in 6.106385989s

• [SLOW TEST:30.418 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:52:43.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-dceaed9d-a83f-47e9-9868-e7117a250ee4
STEP: Creating a pod to test consume secrets
Sep  9 00:52:43.775: INFO: Waiting up to 5m0s for pod "pod-secrets-951334d4-8ed7-4625-a88b-9833bdce188b" in namespace "secrets-2651" to be "success or failure"
Sep  9 00:52:43.778: INFO: Pod "pod-secrets-951334d4-8ed7-4625-a88b-9833bdce188b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.372309ms
Sep  9 00:52:45.817: INFO: Pod "pod-secrets-951334d4-8ed7-4625-a88b-9833bdce188b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042167065s
Sep  9 00:52:47.821: INFO: Pod "pod-secrets-951334d4-8ed7-4625-a88b-9833bdce188b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046499814s
STEP: Saw pod success
Sep  9 00:52:47.821: INFO: Pod "pod-secrets-951334d4-8ed7-4625-a88b-9833bdce188b" satisfied condition "success or failure"
Sep  9 00:52:47.824: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-951334d4-8ed7-4625-a88b-9833bdce188b container secret-volume-test: 
STEP: delete the pod
Sep  9 00:52:47.887: INFO: Waiting for pod pod-secrets-951334d4-8ed7-4625-a88b-9833bdce188b to disappear
Sep  9 00:52:47.892: INFO: Pod pod-secrets-951334d4-8ed7-4625-a88b-9833bdce188b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:52:47.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2651" for this suite.
Sep  9 00:52:53.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:52:54.043: INFO: namespace secrets-2651 deletion completed in 6.147352222s

• [SLOW TEST:10.349 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:52:54.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-98fb6ba9-a0f4-4dca-a430-3406ada9e50d
STEP: Creating a pod to test consume configMaps
Sep  9 00:52:54.121: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3a669da-be65-4c08-b75f-afb0491f29e0" in namespace "configmap-146" to be "success or failure"
Sep  9 00:52:54.126: INFO: Pod "pod-configmaps-f3a669da-be65-4c08-b75f-afb0491f29e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310513ms
Sep  9 00:52:56.315: INFO: Pod "pod-configmaps-f3a669da-be65-4c08-b75f-afb0491f29e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193250196s
Sep  9 00:52:58.319: INFO: Pod "pod-configmaps-f3a669da-be65-4c08-b75f-afb0491f29e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.197155667s
STEP: Saw pod success
Sep  9 00:52:58.319: INFO: Pod "pod-configmaps-f3a669da-be65-4c08-b75f-afb0491f29e0" satisfied condition "success or failure"
Sep  9 00:52:58.321: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-f3a669da-be65-4c08-b75f-afb0491f29e0 container configmap-volume-test: 
STEP: delete the pod
Sep  9 00:52:58.366: INFO: Waiting for pod pod-configmaps-f3a669da-be65-4c08-b75f-afb0491f29e0 to disappear
Sep  9 00:52:58.387: INFO: Pod pod-configmaps-f3a669da-be65-4c08-b75f-afb0491f29e0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:52:58.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-146" for this suite.
Sep  9 00:53:04.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:53:04.534: INFO: namespace configmap-146 deletion completed in 6.143195649s

• [SLOW TEST:10.490 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:53:04.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-df67116f-e4c9-4d30-85e1-58c3367d1386
STEP: Creating a pod to test consume configMaps
Sep  9 00:53:04.662: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9ff77781-ff9c-45e2-a7e1-4cedb2c27fc0" in namespace "projected-2792" to be "success or failure"
Sep  9 00:53:04.668: INFO: Pod "pod-projected-configmaps-9ff77781-ff9c-45e2-a7e1-4cedb2c27fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.585308ms
Sep  9 00:53:06.710: INFO: Pod "pod-projected-configmaps-9ff77781-ff9c-45e2-a7e1-4cedb2c27fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048531969s
Sep  9 00:53:08.715: INFO: Pod "pod-projected-configmaps-9ff77781-ff9c-45e2-a7e1-4cedb2c27fc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053125879s
STEP: Saw pod success
Sep  9 00:53:08.715: INFO: Pod "pod-projected-configmaps-9ff77781-ff9c-45e2-a7e1-4cedb2c27fc0" satisfied condition "success or failure"
Sep  9 00:53:08.719: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-9ff77781-ff9c-45e2-a7e1-4cedb2c27fc0 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  9 00:53:08.759: INFO: Waiting for pod pod-projected-configmaps-9ff77781-ff9c-45e2-a7e1-4cedb2c27fc0 to disappear
Sep  9 00:53:08.818: INFO: Pod pod-projected-configmaps-9ff77781-ff9c-45e2-a7e1-4cedb2c27fc0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:53:08.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2792" for this suite.
Sep  9 00:53:14.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:53:14.907: INFO: namespace projected-2792 deletion completed in 6.085832692s

• [SLOW TEST:10.374 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:53:14.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1294.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1294.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep  9 00:53:21.020: INFO: DNS probes using dns-1294/dns-test-f0b4aa9e-c63b-4663-a89a-84f2ff3852fb succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:53:21.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1294" for this suite.
Sep  9 00:53:27.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:53:27.199: INFO: namespace dns-1294 deletion completed in 6.124923105s

• [SLOW TEST:12.291 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:53:27.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Sep  9 00:53:27.246: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix349696547/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:53:27.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3874" for this suite.
Sep  9 00:53:33.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:53:33.407: INFO: namespace kubectl-3874 deletion completed in 6.086696814s

• [SLOW TEST:6.208 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:53:33.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-1682
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1682 to expose endpoints map[]
Sep  9 00:53:33.508: INFO: Get endpoints failed (16.929346ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Sep  9 00:53:34.512: INFO: successfully validated that service endpoint-test2 in namespace services-1682 exposes endpoints map[] (1.021096879s elapsed)
STEP: Creating pod pod1 in namespace services-1682
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1682 to expose endpoints map[pod1:[80]]
Sep  9 00:53:38.573: INFO: successfully validated that service endpoint-test2 in namespace services-1682 exposes endpoints map[pod1:[80]] (4.05390726s elapsed)
STEP: Creating pod pod2 in namespace services-1682
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1682 to expose endpoints map[pod1:[80] pod2:[80]]
Sep  9 00:53:42.745: INFO: successfully validated that service endpoint-test2 in namespace services-1682 exposes endpoints map[pod1:[80] pod2:[80]] (4.167743383s elapsed)
STEP: Deleting pod pod1 in namespace services-1682
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1682 to expose endpoints map[pod2:[80]]
Sep  9 00:53:43.786: INFO: successfully validated that service endpoint-test2 in namespace services-1682 exposes endpoints map[pod2:[80]] (1.036490712s elapsed)
STEP: Deleting pod pod2 in namespace services-1682
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1682 to expose endpoints map[]
Sep  9 00:53:44.812: INFO: successfully validated that service endpoint-test2 in namespace services-1682 exposes endpoints map[] (1.020914549s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:53:44.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1682" for this suite.
Sep  9 00:54:06.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:54:06.938: INFO: namespace services-1682 deletion completed in 22.088822954s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:33.531 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:54:06.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:54:11.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2318" for this suite.
Sep  9 00:54:57.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:54:57.153: INFO: namespace kubelet-test-2318 deletion completed in 46.097453389s

• [SLOW TEST:50.214 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:54:57.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep  9 00:54:57.252: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57ad3919-b0e8-4d6d-b37a-2ea326c5c1db" in namespace "projected-671" to be "success or failure"
Sep  9 00:54:57.270: INFO: Pod "downwardapi-volume-57ad3919-b0e8-4d6d-b37a-2ea326c5c1db": Phase="Pending", Reason="", readiness=false. Elapsed: 17.479164ms
Sep  9 00:54:59.273: INFO: Pod "downwardapi-volume-57ad3919-b0e8-4d6d-b37a-2ea326c5c1db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020864381s
Sep  9 00:55:01.277: INFO: Pod "downwardapi-volume-57ad3919-b0e8-4d6d-b37a-2ea326c5c1db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0242551s
STEP: Saw pod success
Sep  9 00:55:01.277: INFO: Pod "downwardapi-volume-57ad3919-b0e8-4d6d-b37a-2ea326c5c1db" satisfied condition "success or failure"
Sep  9 00:55:01.279: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-57ad3919-b0e8-4d6d-b37a-2ea326c5c1db container client-container: 
STEP: delete the pod
Sep  9 00:55:01.432: INFO: Waiting for pod downwardapi-volume-57ad3919-b0e8-4d6d-b37a-2ea326c5c1db to disappear
Sep  9 00:55:01.475: INFO: Pod downwardapi-volume-57ad3919-b0e8-4d6d-b37a-2ea326c5c1db no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:55:01.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-671" for this suite.
Sep  9 00:55:07.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:55:07.576: INFO: namespace projected-671 deletion completed in 6.095857469s

• [SLOW TEST:10.422 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:55:07.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Sep  9 00:55:07.632: INFO: Waiting up to 5m0s for pod "client-containers-0c991efc-749a-47bb-93ae-df46e2e93549" in namespace "containers-105" to be "success or failure"
Sep  9 00:55:07.649: INFO: Pod "client-containers-0c991efc-749a-47bb-93ae-df46e2e93549": Phase="Pending", Reason="", readiness=false. Elapsed: 16.824337ms
Sep  9 00:55:09.654: INFO: Pod "client-containers-0c991efc-749a-47bb-93ae-df46e2e93549": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021111933s
Sep  9 00:55:11.664: INFO: Pod "client-containers-0c991efc-749a-47bb-93ae-df46e2e93549": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031801215s
STEP: Saw pod success
Sep  9 00:55:11.664: INFO: Pod "client-containers-0c991efc-749a-47bb-93ae-df46e2e93549" satisfied condition "success or failure"
Sep  9 00:55:11.667: INFO: Trying to get logs from node iruya-worker pod client-containers-0c991efc-749a-47bb-93ae-df46e2e93549 container test-container: 
STEP: delete the pod
Sep  9 00:55:11.687: INFO: Waiting for pod client-containers-0c991efc-749a-47bb-93ae-df46e2e93549 to disappear
Sep  9 00:55:11.691: INFO: Pod client-containers-0c991efc-749a-47bb-93ae-df46e2e93549 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:55:11.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-105" for this suite.
Sep  9 00:55:17.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:55:17.795: INFO: namespace containers-105 deletion completed in 6.10080908s

• [SLOW TEST:10.219 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
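The "override the image's default command" test above exercises the Kubernetes `command`/`args` resolution rules: `command` replaces the image ENTRYPOINT, `args` replaces the image CMD, and setting `command` without `args` discards the image CMD as well. A sketch of that resolution table (illustrative helper, not part of the test):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve what a container actually runs, per the documented
    Kubernetes command/args semantics:
      neither set      -> ENTRYPOINT + CMD
      command only     -> command (image CMD ignored too)
      args only        -> ENTRYPOINT + args
      command and args -> command + args
    """
    if command is not None:
        return list(command) + list(args if args is not None else [])
    return list(image_entrypoint) + list(args if args is not None else image_cmd)
```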
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:55:17.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep  9 00:55:22.383: INFO: Successfully updated pod "pod-update-activedeadlineseconds-721e4cf8-9025-4597-9512-527ea1f9b45d"
Sep  9 00:55:22.383: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-721e4cf8-9025-4597-9512-527ea1f9b45d" in namespace "pods-3714" to be "terminated due to deadline exceeded"
Sep  9 00:55:22.413: INFO: Pod "pod-update-activedeadlineseconds-721e4cf8-9025-4597-9512-527ea1f9b45d": Phase="Running", Reason="", readiness=true. Elapsed: 29.267914ms
Sep  9 00:55:24.417: INFO: Pod "pod-update-activedeadlineseconds-721e4cf8-9025-4597-9512-527ea1f9b45d": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.033166567s
Sep  9 00:55:24.417: INFO: Pod "pod-update-activedeadlineseconds-721e4cf8-9025-4597-9512-527ea1f9b45d" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:55:24.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3714" for this suite.
Sep  9 00:55:30.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:55:30.516: INFO: namespace pods-3714 deletion completed in 6.095283178s

• [SLOW TEST:12.721 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
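The activeDeadlineSeconds test above updates a running pod's deadline and then sees it transition from `Running` to `Failed` with reason `DeadlineExceeded` within about two seconds. The kubelet's check reduces to a simple elapsed-time comparison, sketched here (simplified; the real check works on the pod's start time from status):

```python
def deadline_exceeded(start_ts, active_deadline_seconds, now_ts):
    """True once activeDeadlineSeconds have elapsed since the pod
    started, at which point the kubelet marks the pod Failed with
    reason DeadlineExceeded (the transition visible in the log)."""
    if active_deadline_seconds is None:
        return False
    return now_ts - start_ts >= active_deadline_seconds
```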
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:55:30.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a9f40346-7d98-47d9-861f-fbf1f91e131d
STEP: Creating a pod to test consume secrets
Sep  9 00:55:30.627: INFO: Waiting up to 5m0s for pod "pod-secrets-b878def4-4708-48cc-9bed-8abfac1c9e95" in namespace "secrets-4876" to be "success or failure"
Sep  9 00:55:30.636: INFO: Pod "pod-secrets-b878def4-4708-48cc-9bed-8abfac1c9e95": Phase="Pending", Reason="", readiness=false. Elapsed: 9.298513ms
Sep  9 00:55:32.640: INFO: Pod "pod-secrets-b878def4-4708-48cc-9bed-8abfac1c9e95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012914091s
Sep  9 00:55:34.644: INFO: Pod "pod-secrets-b878def4-4708-48cc-9bed-8abfac1c9e95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01727025s
STEP: Saw pod success
Sep  9 00:55:34.644: INFO: Pod "pod-secrets-b878def4-4708-48cc-9bed-8abfac1c9e95" satisfied condition "success or failure"
Sep  9 00:55:34.647: INFO: Trying to get logs from node iruya-worker pod pod-secrets-b878def4-4708-48cc-9bed-8abfac1c9e95 container secret-env-test: 
STEP: delete the pod
Sep  9 00:55:34.683: INFO: Waiting for pod pod-secrets-b878def4-4708-48cc-9bed-8abfac1c9e95 to disappear
Sep  9 00:55:34.761: INFO: Pod pod-secrets-b878def4-4708-48cc-9bed-8abfac1c9e95 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:55:34.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4876" for this suite.
Sep  9 00:55:40.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:55:40.855: INFO: namespace secrets-4876 deletion completed in 6.090618708s

• [SLOW TEST:10.338 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
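The Secrets-in-env-vars test above consumes a secret through a container's environment. Secret values live base64-encoded under `.data`; injection decodes them into plain environment variable values. A sketch of that decoding step (illustrative only; the real kubelet also validates and sanitizes key names):

```python
import base64

def secret_to_env(secret_data, prefix=""):
    """Decode a Secret's base64 .data map into env-var name/value
    pairs, optionally applying an envFrom-style prefix."""
    return {prefix + k: base64.b64decode(v).decode() for k, v in secret_data.items()}
```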
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:55:40.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep  9 00:55:40.948: INFO: Waiting up to 5m0s for pod "pod-d46d348f-fca6-4544-8973-f83c9dc19152" in namespace "emptydir-865" to be "success or failure"
Sep  9 00:55:40.955: INFO: Pod "pod-d46d348f-fca6-4544-8973-f83c9dc19152": Phase="Pending", Reason="", readiness=false. Elapsed: 7.025692ms
Sep  9 00:55:42.959: INFO: Pod "pod-d46d348f-fca6-4544-8973-f83c9dc19152": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011071289s
Sep  9 00:55:44.963: INFO: Pod "pod-d46d348f-fca6-4544-8973-f83c9dc19152": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014973856s
STEP: Saw pod success
Sep  9 00:55:44.963: INFO: Pod "pod-d46d348f-fca6-4544-8973-f83c9dc19152" satisfied condition "success or failure"
Sep  9 00:55:44.966: INFO: Trying to get logs from node iruya-worker2 pod pod-d46d348f-fca6-4544-8973-f83c9dc19152 container test-container: 
STEP: delete the pod
Sep  9 00:55:44.981: INFO: Waiting for pod pod-d46d348f-fca6-4544-8973-f83c9dc19152 to disappear
Sep  9 00:55:44.991: INFO: Pod pod-d46d348f-fca6-4544-8973-f83c9dc19152 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:55:44.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-865" for this suite.
Sep  9 00:55:51.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:55:51.098: INFO: namespace emptydir-865 deletion completed in 6.103807877s

• [SLOW TEST:10.243 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
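The emptyDir `(non-root,0777,default)` case above verifies that the volume directory carries permission bits 0777 on the node's default medium so a non-root container can write to it. A local sketch of the property being checked (stand-alone illustration, not the test's actual mechanism, which inspects the mount from inside the pod):

```python
import os
import stat
import tempfile

def world_writable_dir_mode():
    """Create a directory, force its permission bits to 0777 as the
    emptyDir plugin does for the default medium, and return the bits."""
    path = tempfile.mkdtemp()
    try:
        os.chmod(path, 0o777)  # chmod is unaffected by the process umask
        return stat.S_IMODE(os.stat(path).st_mode)
    finally:
        os.rmdir(path)
```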
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:55:51.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:56:11.191: INFO: Container started at 2020-09-09 00:55:53 +0000 UTC, pod became ready at 2020-09-09 00:56:09 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:56:11.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1186" for this suite.
Sep  9 00:56:33.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:56:33.284: INFO: namespace container-probe-1186 deletion completed in 22.089057834s

• [SLOW TEST:42.185 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
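The readiness-probe test above asserts the gap between container start (00:55:53) and the pod becoming ready (00:56:09) is at least the probe's `initialDelaySeconds` (the configured delay value itself is not shown in the log). The timing check reduces to a timestamp subtraction, sketched with simplified timestamp strings:

```python
from datetime import datetime

def ready_after_initial_delay(started, became_ready, initial_delay_s):
    """True if the pod reported Ready no earlier than
    initial_delay_s after the container started.
    Timestamps as 'YYYY-MM-DD HH:MM:SS' (timezone suffix dropped)."""
    fmt = "%Y-%m-%d %H:%M:%S"
    delta = datetime.strptime(became_ready, fmt) - datetime.strptime(started, fmt)
    return delta.total_seconds() >= initial_delay_s
```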
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:56:33.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:56:33.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:56:37.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8093" for this suite.
Sep  9 00:57:31.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:57:31.626: INFO: namespace pods-8093 deletion completed in 54.10773108s

• [SLOW TEST:58.342 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
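The websocket-exec test above drives the pod's `exec` subresource, which takes one `command` query parameter per argv element. A sketch of the URL construction (hypothetical helper; `api_server`, parameter set, and ordering are assumptions, not taken from the log):

```python
from urllib.parse import urlencode

def exec_url(api_server, namespace, pod, container, command):
    """Build the exec subresource URL a websocket client would dial:
    repeated 'command' parameters carry the argv, plus stream flags."""
    query = urlencode(
        [("command", c) for c in command]
        + [("container", container), ("stdout", "true"), ("stderr", "true")]
    )
    return f"{api_server}/api/v1/namespaces/{namespace}/pods/{pod}/exec?{query}"
```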
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:57:31.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep  9 00:57:31.723: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:31.728: INFO: Number of nodes with available pods: 0
Sep  9 00:57:31.728: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:57:32.734: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:32.737: INFO: Number of nodes with available pods: 0
Sep  9 00:57:32.737: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:57:33.733: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:33.737: INFO: Number of nodes with available pods: 0
Sep  9 00:57:33.737: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:57:34.751: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:34.754: INFO: Number of nodes with available pods: 0
Sep  9 00:57:34.754: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:57:35.733: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:35.737: INFO: Number of nodes with available pods: 1
Sep  9 00:57:35.737: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 00:57:36.734: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:36.738: INFO: Number of nodes with available pods: 2
Sep  9 00:57:36.738: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Sep  9 00:57:36.761: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:36.763: INFO: Number of nodes with available pods: 1
Sep  9 00:57:36.763: INFO: Node iruya-worker2 is running more than one daemon pod
Sep  9 00:57:37.768: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:37.773: INFO: Number of nodes with available pods: 1
Sep  9 00:57:37.773: INFO: Node iruya-worker2 is running more than one daemon pod
Sep  9 00:57:38.769: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:38.772: INFO: Number of nodes with available pods: 1
Sep  9 00:57:38.772: INFO: Node iruya-worker2 is running more than one daemon pod
Sep  9 00:57:39.768: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:39.772: INFO: Number of nodes with available pods: 1
Sep  9 00:57:39.772: INFO: Node iruya-worker2 is running more than one daemon pod
Sep  9 00:57:40.769: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:40.772: INFO: Number of nodes with available pods: 1
Sep  9 00:57:40.772: INFO: Node iruya-worker2 is running more than one daemon pod
Sep  9 00:57:41.769: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:41.772: INFO: Number of nodes with available pods: 1
Sep  9 00:57:41.772: INFO: Node iruya-worker2 is running more than one daemon pod
Sep  9 00:57:42.769: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:42.773: INFO: Number of nodes with available pods: 1
Sep  9 00:57:42.773: INFO: Node iruya-worker2 is running more than one daemon pod
Sep  9 00:57:43.775: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:43.779: INFO: Number of nodes with available pods: 1
Sep  9 00:57:43.779: INFO: Node iruya-worker2 is running more than one daemon pod
Sep  9 00:57:44.769: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:44.772: INFO: Number of nodes with available pods: 1
Sep  9 00:57:44.772: INFO: Node iruya-worker2 is running more than one daemon pod
Sep  9 00:57:45.768: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:45.771: INFO: Number of nodes with available pods: 1
Sep  9 00:57:45.771: INFO: Node iruya-worker2 is running more than one daemon pod
Sep  9 00:57:46.769: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 00:57:46.773: INFO: Number of nodes with available pods: 2
Sep  9 00:57:46.773: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5763, will wait for the garbage collector to delete the pods
Sep  9 00:57:46.835: INFO: Deleting DaemonSet.extensions daemon-set took: 5.904753ms
Sep  9 00:57:47.135: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.251841ms
Sep  9 00:57:53.638: INFO: Number of nodes with available pods: 0
Sep  9 00:57:53.638: INFO: Number of running nodes: 0, number of available pods: 0
Sep  9 00:57:53.640: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5763/daemonsets","resourceVersion":"327849"},"items":null}

Sep  9 00:57:53.642: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5763/pods","resourceVersion":"327849"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:57:53.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5763" for this suite.
Sep  9 00:57:59.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:57:59.788: INFO: namespace daemonsets-5763 deletion completed in 6.11484826s

• [SLOW TEST:28.161 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
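The DaemonSet run above repeatedly logs that pods "can't tolerate node iruya-control-plane" because of its `node-role.kubernetes.io/master:NoSchedule` taint, so only the two workers count toward "running nodes". A minimal sketch of that eligibility filter (exact key match, NoSchedule effect only; real toleration matching also handles operators, values, and other effects):

```python
def schedulable_nodes(nodes, tolerations):
    """Return names of nodes whose NoSchedule taints are all tolerated,
    mirroring why the DaemonSet skips the tainted control-plane node."""
    tolerated_keys = {t["key"] for t in tolerations}
    return [
        n["name"]
        for n in nodes
        if all(t["key"] in tolerated_keys
               for t in n.get("taints", [])
               if t.get("effect") == "NoSchedule")
    ]
```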
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:57:59.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  9 00:57:59.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-801'
Sep  9 00:58:02.547: INFO: stderr: ""
Sep  9 00:58:02.547: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Sep  9 00:58:02.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-801'
Sep  9 00:58:06.470: INFO: stderr: ""
Sep  9 00:58:06.470: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:58:06.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-801" for this suite.
Sep  9 00:58:12.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:58:12.566: INFO: namespace kubectl-801 deletion completed in 6.091666352s

• [SLOW TEST:12.777 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
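The kubectl-run test above shells out with `--restart=Never --generator=run-pod/v1`, the combination that (on this 1.15-era kubectl) produces a bare Pod rather than a Deployment or Job. A reconstruction of that argv as the log shows it (flag order copied from the log line):

```python
def kubectl_run_pod_argv(name, image, namespace, kubeconfig):
    """Argv for creating a bare pod via kubectl run, matching the
    invocation logged by the e2e framework."""
    return [
        "kubectl", "--kubeconfig=" + kubeconfig,
        "run", name, "--restart=Never", "--generator=run-pod/v1",
        "--image=" + image, "--namespace=" + namespace,
    ]
```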
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:58:12.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Sep  9 00:58:12.711: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1485,SelfLink:/api/v1/namespaces/watch-1485/configmaps/e2e-watch-test-resource-version,UID:e9d729d8-5509-4c15-9245-ba37a0f9d435,ResourceVersion:327939,Generation:0,CreationTimestamp:2020-09-09 00:58:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep  9 00:58:12.711: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1485,SelfLink:/api/v1/namespaces/watch-1485/configmaps/e2e-watch-test-resource-version,UID:e9d729d8-5509-4c15-9245-ba37a0f9d435,ResourceVersion:327940,Generation:0,CreationTimestamp:2020-09-09 00:58:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:58:12.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1485" for this suite.
Sep  9 00:58:18.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:58:18.816: INFO: namespace watch-1485 deletion completed in 6.090742731s

• [SLOW TEST:6.249 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:58:18.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0909 00:58:59.225283       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep  9 00:58:59.225: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:58:59.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7756" for this suite.
Sep  9 00:59:07.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:59:07.311: INFO: namespace gc-7756 deletion completed in 8.082047443s

• [SLOW TEST:48.494 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:59:07.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Sep  9 00:59:07.923: INFO: Pod name pod-release: Found 0 pods out of 1
Sep  9 00:59:12.927: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:59:13.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1306" for this suite.
Sep  9 00:59:20.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:59:20.125: INFO: namespace replication-controller-1306 deletion completed in 6.164267728s

• [SLOW TEST:12.814 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:59:20.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 00:59:20.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2219'
Sep  9 00:59:20.806: INFO: stderr: ""
Sep  9 00:59:20.806: INFO: stdout: "replicationcontroller/redis-master created\n"
Sep  9 00:59:20.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2219'
Sep  9 00:59:21.113: INFO: stderr: ""
Sep  9 00:59:21.113: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep  9 00:59:22.214: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:59:22.214: INFO: Found 0 / 1
Sep  9 00:59:23.123: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:59:23.123: INFO: Found 0 / 1
Sep  9 00:59:24.122: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:59:24.122: INFO: Found 1 / 1
Sep  9 00:59:24.122: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Sep  9 00:59:24.125: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 00:59:24.125: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep  9 00:59:24.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-7tdnb --namespace=kubectl-2219'
Sep  9 00:59:24.245: INFO: stderr: ""
Sep  9 00:59:24.245: INFO: stdout: "Name:           redis-master-7tdnb\nNamespace:      kubectl-2219\nPriority:       0\nNode:           iruya-worker2/172.18.0.9\nStart Time:     Wed, 09 Sep 2020 00:59:20 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.244.1.109\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://06fc382a1fc94691637a7140739740ef6863071fa3a5a092f0ab468e68cca585\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 09 Sep 2020 00:59:23 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qzvkh (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-qzvkh:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-qzvkh\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                    Message\n  ----    ------     ----  ----                    -------\n  Normal  Scheduled  4s    default-scheduler       Successfully assigned kubectl-2219/redis-master-7tdnb to iruya-worker2\n  Normal  Pulled     2s    kubelet, iruya-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, iruya-worker2  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-worker2  Started container redis-master\n"
Sep  9 00:59:24.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2219'
Sep  9 00:59:24.372: INFO: stderr: ""
Sep  9 00:59:24.372: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-2219\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: redis-master-7tdnb\n"
Sep  9 00:59:24.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2219'
Sep  9 00:59:24.472: INFO: stderr: ""
Sep  9 00:59:24.472: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-2219\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.101.110.19\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.109:6379\nSession Affinity:  None\nEvents:            \n"
Sep  9 00:59:24.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Sep  9 00:59:24.597: INFO: stderr: ""
Sep  9 00:59:24.597: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 07 Sep 2020 19:16:27 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 09 Sep 2020 00:58:42 +0000   Mon, 07 Sep 2020 19:16:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 09 Sep 2020 00:58:42 +0000   Mon, 07 Sep 2020 19:16:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 09 Sep 2020 00:58:42 +0000   Mon, 07 Sep 2020 19:16:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 09 Sep 2020 00:58:42 +0000   Mon, 07 Sep 2020 19:17:17 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.10\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nSystem Info:\n Machine ID:                 db54631fdbef479e8d44d7c6b9cc607b\n System UUID:                ac9503c4-2df4-4dc0-8bfd-7fa51708cd67\n Boot ID:                    16f80d7c-7741-4040-9735-0d166ad57c21\n Kernel Version:             4.15.0-115-generic\n OS Image:                   Ubuntu 20.04 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version:            v1.15.13-beta.0.1+a34f1e483104bd\n Kube-Proxy Version:         v1.15.13-beta.0.1+a34f1e483104bd\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (8 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-bpjcf                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     29h\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29h\n  kube-system                kindnet-9f5nt                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      29h\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         29h\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         29h\n  kube-system                kube-proxy-sxzz2                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29h\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         29h\n  local-path-storage         local-path-provisioner-668779bd7-spztm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                750m (4%)   100m (0%)\n  memory             120Mi (0%)  220Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Sep  9 00:59:24.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2219'
Sep  9 00:59:24.695: INFO: stderr: ""
Sep  9 00:59:24.695: INFO: stdout: "Name:         kubectl-2219\nLabels:       e2e-framework=kubectl\n              e2e-run=d5244ffa-5e0a-4101-876b-8e6da8386968\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:59:24.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2219" for this suite.
Sep  9 00:59:46.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:59:46.824: INFO: namespace kubectl-2219 deletion completed in 22.125942071s

• [SLOW TEST:26.699 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:59:46.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep  9 00:59:46.904: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e9759f9-041d-4f46-950e-5454984f65ea" in namespace "projected-3558" to be "success or failure"
Sep  9 00:59:46.912: INFO: Pod "downwardapi-volume-2e9759f9-041d-4f46-950e-5454984f65ea": Phase="Pending", Reason="", readiness=false. Elapsed: 7.736376ms
Sep  9 00:59:48.917: INFO: Pod "downwardapi-volume-2e9759f9-041d-4f46-950e-5454984f65ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012327464s
Sep  9 00:59:50.987: INFO: Pod "downwardapi-volume-2e9759f9-041d-4f46-950e-5454984f65ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082577176s
STEP: Saw pod success
Sep  9 00:59:50.987: INFO: Pod "downwardapi-volume-2e9759f9-041d-4f46-950e-5454984f65ea" satisfied condition "success or failure"
Sep  9 00:59:50.990: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2e9759f9-041d-4f46-950e-5454984f65ea container client-container: 
STEP: delete the pod
Sep  9 00:59:51.070: INFO: Waiting for pod downwardapi-volume-2e9759f9-041d-4f46-950e-5454984f65ea to disappear
Sep  9 00:59:51.184: INFO: Pod downwardapi-volume-2e9759f9-041d-4f46-950e-5454984f65ea no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 00:59:51.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3558" for this suite.
Sep  9 00:59:57.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 00:59:57.279: INFO: namespace projected-3558 deletion completed in 6.091611778s

• [SLOW TEST:10.454 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 00:59:57.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f11e4b1a-26cf-43d8-94a5-d977364600d7
STEP: Creating a pod to test consume secrets
Sep  9 00:59:57.375: INFO: Waiting up to 5m0s for pod "pod-secrets-51d0c7b1-8840-4d73-a63e-743559cf0d9c" in namespace "secrets-1428" to be "success or failure"
Sep  9 00:59:57.379: INFO: Pod "pod-secrets-51d0c7b1-8840-4d73-a63e-743559cf0d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.797558ms
Sep  9 00:59:59.424: INFO: Pod "pod-secrets-51d0c7b1-8840-4d73-a63e-743559cf0d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048576346s
Sep  9 01:00:01.472: INFO: Pod "pod-secrets-51d0c7b1-8840-4d73-a63e-743559cf0d9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09685714s
STEP: Saw pod success
Sep  9 01:00:01.472: INFO: Pod "pod-secrets-51d0c7b1-8840-4d73-a63e-743559cf0d9c" satisfied condition "success or failure"
Sep  9 01:00:01.475: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-51d0c7b1-8840-4d73-a63e-743559cf0d9c container secret-volume-test: 
STEP: delete the pod
Sep  9 01:00:01.524: INFO: Waiting for pod pod-secrets-51d0c7b1-8840-4d73-a63e-743559cf0d9c to disappear
Sep  9 01:00:01.541: INFO: Pod pod-secrets-51d0c7b1-8840-4d73-a63e-743559cf0d9c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:00:01.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1428" for this suite.
Sep  9 01:00:07.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:00:07.624: INFO: namespace secrets-1428 deletion completed in 6.079613935s

• [SLOW TEST:10.345 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:00:07.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep  9 01:00:07.747: INFO: Waiting up to 5m0s for pod "pod-3f763e7b-c68f-46b9-aeb0-01503581622c" in namespace "emptydir-3567" to be "success or failure"
Sep  9 01:00:07.750: INFO: Pod "pod-3f763e7b-c68f-46b9-aeb0-01503581622c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218305ms
Sep  9 01:00:09.843: INFO: Pod "pod-3f763e7b-c68f-46b9-aeb0-01503581622c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096038332s
Sep  9 01:00:11.849: INFO: Pod "pod-3f763e7b-c68f-46b9-aeb0-01503581622c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101727011s
STEP: Saw pod success
Sep  9 01:00:11.849: INFO: Pod "pod-3f763e7b-c68f-46b9-aeb0-01503581622c" satisfied condition "success or failure"
Sep  9 01:00:11.852: INFO: Trying to get logs from node iruya-worker pod pod-3f763e7b-c68f-46b9-aeb0-01503581622c container test-container: 
STEP: delete the pod
Sep  9 01:00:12.037: INFO: Waiting for pod pod-3f763e7b-c68f-46b9-aeb0-01503581622c to disappear
Sep  9 01:00:12.074: INFO: Pod pod-3f763e7b-c68f-46b9-aeb0-01503581622c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:00:12.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3567" for this suite.
Sep  9 01:00:18.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:00:18.171: INFO: namespace emptydir-3567 deletion completed in 6.092941714s

• [SLOW TEST:10.546 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:00:18.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Sep  9 01:00:26.295: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  9 01:00:26.316: INFO: Pod pod-with-poststart-http-hook still exists
Sep  9 01:00:28.316: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  9 01:00:28.320: INFO: Pod pod-with-poststart-http-hook still exists
Sep  9 01:00:30.316: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  9 01:00:30.320: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:00:30.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8206" for this suite.
Sep  9 01:00:52.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:00:52.419: INFO: namespace container-lifecycle-hook-8206 deletion completed in 22.095134721s

• [SLOW TEST:34.248 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:00:52.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-57da3823-d4db-4a52-9f70-187026f9cfc9
STEP: Creating a pod to test consume secrets
Sep  9 01:00:52.515: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d03c4702-189b-4a0d-9866-2cb69d86e796" in namespace "projected-6105" to be "success or failure"
Sep  9 01:00:52.537: INFO: Pod "pod-projected-secrets-d03c4702-189b-4a0d-9866-2cb69d86e796": Phase="Pending", Reason="", readiness=false. Elapsed: 22.080852ms
Sep  9 01:00:54.541: INFO: Pod "pod-projected-secrets-d03c4702-189b-4a0d-9866-2cb69d86e796": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026195036s
Sep  9 01:00:56.545: INFO: Pod "pod-projected-secrets-d03c4702-189b-4a0d-9866-2cb69d86e796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030386772s
STEP: Saw pod success
Sep  9 01:00:56.545: INFO: Pod "pod-projected-secrets-d03c4702-189b-4a0d-9866-2cb69d86e796" satisfied condition "success or failure"
Sep  9 01:00:56.549: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-d03c4702-189b-4a0d-9866-2cb69d86e796 container projected-secret-volume-test: 
STEP: delete the pod
Sep  9 01:00:56.636: INFO: Waiting for pod pod-projected-secrets-d03c4702-189b-4a0d-9866-2cb69d86e796 to disappear
Sep  9 01:00:56.662: INFO: Pod pod-projected-secrets-d03c4702-189b-4a0d-9866-2cb69d86e796 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:00:56.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6105" for this suite.
Sep  9 01:01:02.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:01:02.780: INFO: namespace projected-6105 deletion completed in 6.114554564s

• [SLOW TEST:10.361 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
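This test mounts a secret into a pod via a `projected` volume and reads it back from the container. A sketch of the pod the test creates, using the pod and secret names recorded in the log; the image and mount path are assumptions:

```yaml
# Sketch: secret consumed through a projected volume, as tested above.
# Image and mountPath are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-d03c4702-189b-4a0d-9866-2cb69d86e796   # from the log
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test                               # from the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0           # assumed image
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume                        # assumed path
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-57da3823-d4db-4a52-9f70-187026f9cfc9   # from the log
```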
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:01:02.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Sep  9 01:01:02.844: INFO: Waiting up to 5m0s for pod "client-containers-2cfae473-f21d-4441-9a7b-5388c3d4a324" in namespace "containers-2930" to be "success or failure"
Sep  9 01:01:02.848: INFO: Pod "client-containers-2cfae473-f21d-4441-9a7b-5388c3d4a324": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108346ms
Sep  9 01:01:04.852: INFO: Pod "client-containers-2cfae473-f21d-4441-9a7b-5388c3d4a324": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008074857s
Sep  9 01:01:06.856: INFO: Pod "client-containers-2cfae473-f21d-4441-9a7b-5388c3d4a324": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011941675s
STEP: Saw pod success
Sep  9 01:01:06.856: INFO: Pod "client-containers-2cfae473-f21d-4441-9a7b-5388c3d4a324" satisfied condition "success or failure"
Sep  9 01:01:06.859: INFO: Trying to get logs from node iruya-worker pod client-containers-2cfae473-f21d-4441-9a7b-5388c3d4a324 container test-container: 
STEP: delete the pod
Sep  9 01:01:06.887: INFO: Waiting for pod client-containers-2cfae473-f21d-4441-9a7b-5388c3d4a324 to disappear
Sep  9 01:01:06.895: INFO: Pod client-containers-2cfae473-f21d-4441-9a7b-5388c3d4a324 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:01:06.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2930" for this suite.
Sep  9 01:01:12.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:01:12.986: INFO: namespace containers-2930 deletion completed in 6.087641777s

• [SLOW TEST:10.205 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
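The "override the image's default arguments" test sets `args` on the container spec, which replaces the image's Docker `CMD` while leaving its `ENTRYPOINT` intact. A sketch using the pod and container names from the log; the image and argument values are assumptions:

```yaml
# Sketch: overriding the image's default CMD via spec.containers[].args.
# Image and args are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-2cfae473-f21d-4441-9a7b-5388c3d4a324   # from the log
spec:
  restartPolicy: Never
  containers:
  - name: test-container                        # from the log
    image: docker.io/library/busybox:1.29       # assumed image
    args: ["echo", "override", "arguments"]     # assumed args; replaces the image CMD
```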
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:01:12.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-8ee07e0a-4455-48ce-93fc-fa88de7e1db6
STEP: Creating a pod to test consume secrets
Sep  9 01:01:13.053: INFO: Waiting up to 5m0s for pod "pod-secrets-2abb0fe3-d2fa-45d7-a159-ebbcc06b625e" in namespace "secrets-5370" to be "success or failure"
Sep  9 01:01:13.057: INFO: Pod "pod-secrets-2abb0fe3-d2fa-45d7-a159-ebbcc06b625e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.643465ms
Sep  9 01:01:15.061: INFO: Pod "pod-secrets-2abb0fe3-d2fa-45d7-a159-ebbcc06b625e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007777927s
Sep  9 01:01:17.065: INFO: Pod "pod-secrets-2abb0fe3-d2fa-45d7-a159-ebbcc06b625e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011955874s
STEP: Saw pod success
Sep  9 01:01:17.065: INFO: Pod "pod-secrets-2abb0fe3-d2fa-45d7-a159-ebbcc06b625e" satisfied condition "success or failure"
Sep  9 01:01:17.068: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-2abb0fe3-d2fa-45d7-a159-ebbcc06b625e container secret-volume-test: 
STEP: delete the pod
Sep  9 01:01:17.127: INFO: Waiting for pod pod-secrets-2abb0fe3-d2fa-45d7-a159-ebbcc06b625e to disappear
Sep  9 01:01:17.136: INFO: Pod pod-secrets-2abb0fe3-d2fa-45d7-a159-ebbcc06b625e no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:01:17.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5370" for this suite.
Sep  9 01:01:23.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:01:23.235: INFO: namespace secrets-5370 deletion completed in 6.092732451s

• [SLOW TEST:10.248 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
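This test mounts a secret with an explicit `items` mapping and a per-item file `mode` (the "Item Mode" in the test name). A sketch using the secret and pod names from the log; the key, path, image, and mode value are assumptions:

```yaml
# Sketch: secret volume with key-to-path mapping and an item mode, as tested above.
# Key, path, mode, and image are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-2abb0fe3-d2fa-45d7-a159-ebbcc06b625e   # from the log
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test                               # from the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0 # assumed image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume                        # assumed path
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-8ee07e0a-4455-48ce-93fc-fa88de7e1db6   # from the log
      items:
      - key: data-1              # assumed key
        path: new-path-data-1    # assumed mapped path
        mode: 0400               # the per-item mode under test (assumed value)
```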
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:01:23.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Sep  9 01:01:23.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8677'
Sep  9 01:01:23.615: INFO: stderr: ""
Sep  9 01:01:23.615: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  9 01:01:23.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8677'
Sep  9 01:01:23.776: INFO: stderr: ""
Sep  9 01:01:23.777: INFO: stdout: "update-demo-nautilus-djrmf update-demo-nautilus-q6s5w "
Sep  9 01:01:23.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djrmf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8677'
Sep  9 01:01:23.861: INFO: stderr: ""
Sep  9 01:01:23.861: INFO: stdout: ""
Sep  9 01:01:23.861: INFO: update-demo-nautilus-djrmf is created but not running
Sep  9 01:01:28.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8677'
Sep  9 01:01:28.960: INFO: stderr: ""
Sep  9 01:01:28.960: INFO: stdout: "update-demo-nautilus-djrmf update-demo-nautilus-q6s5w "
Sep  9 01:01:28.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djrmf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8677'
Sep  9 01:01:29.057: INFO: stderr: ""
Sep  9 01:01:29.057: INFO: stdout: "true"
Sep  9 01:01:29.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djrmf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8677'
Sep  9 01:01:29.158: INFO: stderr: ""
Sep  9 01:01:29.158: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 01:01:29.158: INFO: validating pod update-demo-nautilus-djrmf
Sep  9 01:01:29.162: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 01:01:29.162: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 01:01:29.162: INFO: update-demo-nautilus-djrmf is verified up and running
Sep  9 01:01:29.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q6s5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8677'
Sep  9 01:01:29.251: INFO: stderr: ""
Sep  9 01:01:29.251: INFO: stdout: "true"
Sep  9 01:01:29.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q6s5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8677'
Sep  9 01:01:29.341: INFO: stderr: ""
Sep  9 01:01:29.341: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 01:01:29.341: INFO: validating pod update-demo-nautilus-q6s5w
Sep  9 01:01:29.345: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 01:01:29.345: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 01:01:29.345: INFO: update-demo-nautilus-q6s5w is verified up and running
STEP: rolling-update to new replication controller
Sep  9 01:01:29.347: INFO: scanned /root for discovery docs: 
Sep  9 01:01:29.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8677'
Sep  9 01:01:51.932: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Sep  9 01:01:51.932: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  9 01:01:51.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8677'
Sep  9 01:01:52.019: INFO: stderr: ""
Sep  9 01:01:52.019: INFO: stdout: "update-demo-kitten-645s7 update-demo-kitten-k22b9 update-demo-nautilus-q6s5w "
STEP: Replicas for name=update-demo: expected=2 actual=3
Sep  9 01:01:57.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8677'
Sep  9 01:01:57.135: INFO: stderr: ""
Sep  9 01:01:57.136: INFO: stdout: "update-demo-kitten-645s7 update-demo-kitten-k22b9 "
Sep  9 01:01:57.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-645s7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8677'
Sep  9 01:01:57.246: INFO: stderr: ""
Sep  9 01:01:57.246: INFO: stdout: "true"
Sep  9 01:01:57.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-645s7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8677'
Sep  9 01:01:57.338: INFO: stderr: ""
Sep  9 01:01:57.338: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Sep  9 01:01:57.338: INFO: validating pod update-demo-kitten-645s7
Sep  9 01:01:57.342: INFO: got data: {
  "image": "kitten.jpg"
}

Sep  9 01:01:57.342: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Sep  9 01:01:57.342: INFO: update-demo-kitten-645s7 is verified up and running
Sep  9 01:01:57.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-k22b9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8677'
Sep  9 01:01:57.438: INFO: stderr: ""
Sep  9 01:01:57.438: INFO: stdout: "true"
Sep  9 01:01:57.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-k22b9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8677'
Sep  9 01:01:57.539: INFO: stderr: ""
Sep  9 01:01:57.539: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Sep  9 01:01:57.539: INFO: validating pod update-demo-kitten-k22b9
Sep  9 01:01:57.583: INFO: got data: {
  "image": "kitten.jpg"
}

Sep  9 01:01:57.583: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Sep  9 01:01:57.583: INFO: update-demo-kitten-k22b9 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:01:57.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8677" for this suite.
Sep  9 01:02:21.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:02:21.718: INFO: namespace kubectl-8677 deletion completed in 24.125864343s

• [SLOW TEST:58.483 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
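As the stderr line above notes, `kubectl rolling-update` is deprecated in favor of `kubectl rollout` on Deployments. The same two-replica rolling update the test performs on a replication controller maps onto a Deployment with a `RollingUpdate` strategy; a sketch under assumed surge/unavailability settings:

```yaml
# Sketch: Deployment equivalent of the deprecated rolling-update flow above.
# Strategy parameters are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: update-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      name: update-demo
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # assumed; at most 3 pods during the update
      maxUnavailable: 0    # assumed; keep 2 pods available, as in the log
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0   # image from the log
```

Changing `spec.template.spec.containers[0].image` (e.g. to the kitten image) and re-applying triggers the rollout that `rolling-update` performed imperatively here.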
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:02:21.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Sep  9 01:02:26.335: INFO: Successfully updated pod "labelsupdate88a72f78-c5c6-46c4-bc16-b64a23d72946"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:02:28.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8999" for this suite.
Sep  9 01:02:50.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:02:50.439: INFO: namespace downward-api-8999 deletion completed in 22.083174835s

• [SLOW TEST:28.721 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
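The Downward API volume test relies on the kubelet refreshing the projected `labels` file when pod labels change, which is what "Successfully updated pod" verifies. A sketch of the pod, using the pod name from the log; the labels, image, and mount path are assumptions:

```yaml
# Sketch: Downward API volume exposing pod labels, as tested above.
# Labels, image, and paths are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate88a72f78-c5c6-46c4-bc16-b64a23d72946   # from the log
  labels:
    key: value1                                            # assumed initial label
spec:
  containers:
  - name: client-container                                 # assumed name
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0 # assumed image
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels   # file contents track label updates
```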
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:02:50.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-5db0998c-4bf9-47b6-9ba6-52d99ac1bf53 in namespace container-probe-6846
Sep  9 01:02:54.634: INFO: Started pod busybox-5db0998c-4bf9-47b6-9ba6-52d99ac1bf53 in namespace container-probe-6846
STEP: checking the pod's current state and verifying that restartCount is present
Sep  9 01:02:54.636: INFO: Initial restart count of pod busybox-5db0998c-4bf9-47b6-9ba6-52d99ac1bf53 is 0
Sep  9 01:03:46.747: INFO: Restart count of pod container-probe-6846/busybox-5db0998c-4bf9-47b6-9ba6-52d99ac1bf53 is now 1 (52.110622454s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:03:46.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6846" for this suite.
Sep  9 01:03:52.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:03:52.915: INFO: namespace container-probe-6846 deletion completed in 6.102845079s

• [SLOW TEST:62.475 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
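The probe test starts a pod whose health file disappears after a short delay, then waits for the kubelet to restart the container (restart count 0 → 1 after ~52s above). A sketch using the pod name from the log; the image, command, and probe timings are assumptions:

```yaml
# Sketch: exec "cat /tmp/health" liveness probe, as exercised above.
# Image, command, and timings are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-5db0998c-4bf9-47b6-9ba6-52d99ac1bf53   # from the log
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29              # assumed image
    args:                                              # assumed: health file vanishes after 10s
    - /bin/sh
    - -c
    - "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # probe named in the test title
      initialDelaySeconds: 15             # assumed timing
      failureThreshold: 1                 # assumed; restart on first failure
```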
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:03:52.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-b191c25d-a442-4d58-a6b5-3d1a1e32160c
STEP: Creating a pod to test consume configMaps
Sep  9 01:03:53.035: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fcf49e8d-0f92-4c0c-8b8a-9b4a984e41fe" in namespace "projected-8577" to be "success or failure"
Sep  9 01:03:53.050: INFO: Pod "pod-projected-configmaps-fcf49e8d-0f92-4c0c-8b8a-9b4a984e41fe": Phase="Pending", Reason="", readiness=false. Elapsed: 15.063763ms
Sep  9 01:03:55.081: INFO: Pod "pod-projected-configmaps-fcf49e8d-0f92-4c0c-8b8a-9b4a984e41fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045973931s
Sep  9 01:03:57.084: INFO: Pod "pod-projected-configmaps-fcf49e8d-0f92-4c0c-8b8a-9b4a984e41fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049468833s
STEP: Saw pod success
Sep  9 01:03:57.084: INFO: Pod "pod-projected-configmaps-fcf49e8d-0f92-4c0c-8b8a-9b4a984e41fe" satisfied condition "success or failure"
Sep  9 01:03:57.086: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-fcf49e8d-0f92-4c0c-8b8a-9b4a984e41fe container projected-configmap-volume-test: 
STEP: delete the pod
Sep  9 01:03:57.111: INFO: Waiting for pod pod-projected-configmaps-fcf49e8d-0f92-4c0c-8b8a-9b4a984e41fe to disappear
Sep  9 01:03:57.115: INFO: Pod pod-projected-configmaps-fcf49e8d-0f92-4c0c-8b8a-9b4a984e41fe no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:03:57.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8577" for this suite.
Sep  9 01:04:03.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:04:03.226: INFO: namespace projected-8577 deletion completed in 6.108044127s

• [SLOW TEST:10.311 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
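This variant consumes a configMap through a projected volume with an `items` key-to-path mapping (the "mappings" in the test name). A sketch using the pod and configMap names from the log; the key, mapped path, and image are assumptions:

```yaml
# Sketch: projected configMap volume with a key-to-path mapping, as tested above.
# Key, path, and image are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-fcf49e8d-0f92-4c0c-8b8a-9b4a984e41fe   # from the log
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test                               # from the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0              # assumed image
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume                        # assumed path
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-b191c25d-a442-4d58-a6b5-3d1a1e32160c  # from the log
          items:
          - key: data-2             # assumed key
            path: path/to/data-2    # the mapping under test (assumed path)
```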
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:04:03.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Sep  9 01:04:03.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5209'
Sep  9 01:04:03.592: INFO: stderr: ""
Sep  9 01:04:03.592: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  9 01:04:03.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5209'
Sep  9 01:04:03.703: INFO: stderr: ""
Sep  9 01:04:03.703: INFO: stdout: "update-demo-nautilus-ckwr2 update-demo-nautilus-t7kzl "
Sep  9 01:04:03.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ckwr2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5209'
Sep  9 01:04:03.791: INFO: stderr: ""
Sep  9 01:04:03.791: INFO: stdout: ""
Sep  9 01:04:03.791: INFO: update-demo-nautilus-ckwr2 is created but not running
Sep  9 01:04:08.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5209'
Sep  9 01:04:08.896: INFO: stderr: ""
Sep  9 01:04:08.896: INFO: stdout: "update-demo-nautilus-ckwr2 update-demo-nautilus-t7kzl "
Sep  9 01:04:08.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ckwr2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5209'
Sep  9 01:04:08.990: INFO: stderr: ""
Sep  9 01:04:08.990: INFO: stdout: "true"
Sep  9 01:04:08.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ckwr2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5209'
Sep  9 01:04:09.128: INFO: stderr: ""
Sep  9 01:04:09.129: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 01:04:09.129: INFO: validating pod update-demo-nautilus-ckwr2
Sep  9 01:04:09.133: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 01:04:09.133: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 01:04:09.133: INFO: update-demo-nautilus-ckwr2 is verified up and running
Sep  9 01:04:09.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t7kzl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5209'
Sep  9 01:04:09.229: INFO: stderr: ""
Sep  9 01:04:09.229: INFO: stdout: "true"
Sep  9 01:04:09.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t7kzl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5209'
Sep  9 01:04:09.319: INFO: stderr: ""
Sep  9 01:04:09.319: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 01:04:09.319: INFO: validating pod update-demo-nautilus-t7kzl
Sep  9 01:04:09.323: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 01:04:09.324: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 01:04:09.324: INFO: update-demo-nautilus-t7kzl is verified up and running
STEP: using delete to clean up resources
Sep  9 01:04:09.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5209'
Sep  9 01:04:09.479: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  9 01:04:09.479: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Sep  9 01:04:09.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5209'
Sep  9 01:04:09.577: INFO: stderr: "No resources found.\n"
Sep  9 01:04:09.577: INFO: stdout: ""
Sep  9 01:04:09.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5209 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep  9 01:04:09.671: INFO: stderr: ""
Sep  9 01:04:09.671: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:04:09.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5209" for this suite.
Sep  9 01:04:31.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:04:31.767: INFO: namespace kubectl-5209 deletion completed in 22.093154016s

• [SLOW TEST:28.540 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
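The section above repeats the same `kubectl get pods ... -o template` invocation every five seconds until the Go template prints `"true"`. Stripped of kubectl specifics, that retry shape is a generic poll loop; a minimal POSIX-shell sketch follows (the `poll_until` name, argument order, and intervals are illustrative, not part of the e2e framework):

```shell
#!/bin/sh
# poll_until EXPECTED TIMEOUT INTERVAL CMD...
# Re-runs CMD every INTERVAL seconds until its stdout equals EXPECTED
# or TIMEOUT seconds elapse; mirrors the wait loops in the log above.
poll_until() {
  expected="$1"; timeout="$2"; interval="$3"; shift 3
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    out=$("$@")
    [ "$out" = "$expected" ] && return 0   # condition met
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 1                                  # timed out
}

# Local demonstration; in the suite the command would be a kubectl
# go-template query such as the container-status check in the log.
poll_until hello 3 1 echo hello && echo "condition met"
```

In the suite, `CMD` is the long `kubectl ... --template={{if (exists . "status" "containerStatuses")}}...` query, `EXPECTED` is `true`, and the interval is 5 seconds.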
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:04:31.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Sep  9 01:04:39.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:04:39.913: INFO: Pod pod-with-prestop-exec-hook still exists
Sep  9 01:04:41.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:04:41.917: INFO: Pod pod-with-prestop-exec-hook still exists
Sep  9 01:04:43.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:04:43.918: INFO: Pod pod-with-prestop-exec-hook still exists
Sep  9 01:04:45.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:04:45.918: INFO: Pod pod-with-prestop-exec-hook still exists
Sep  9 01:04:47.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:04:47.918: INFO: Pod pod-with-prestop-exec-hook still exists
Sep  9 01:04:49.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:04:49.918: INFO: Pod pod-with-prestop-exec-hook still exists
Sep  9 01:04:51.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:04:51.918: INFO: Pod pod-with-prestop-exec-hook still exists
Sep  9 01:04:53.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:04:53.917: INFO: Pod pod-with-prestop-exec-hook still exists
Sep  9 01:04:55.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:04:55.918: INFO: Pod pod-with-prestop-exec-hook still exists
Sep  9 01:04:57.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:04:57.918: INFO: Pod pod-with-prestop-exec-hook still exists
Sep  9 01:04:59.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:04:59.918: INFO: Pod pod-with-prestop-exec-hook still exists
Sep  9 01:05:01.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:05:01.917: INFO: Pod pod-with-prestop-exec-hook still exists
Sep  9 01:05:03.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep  9 01:05:03.918: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:05:03.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5706" for this suite.
Sep  9 01:05:25.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:05:26.032: INFO: namespace container-lifecycle-hook-5706 deletion completed in 22.094221259s

• [SLOW TEST:54.264 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
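The "create the pod with lifecycle hook" step above attaches a preStop exec handler. A manifest of roughly that shape, applied the way the suite shells out to kubectl, could look like the following; the container name, image, and handler command are illustrative assumptions, not the suite's actual spec (the suite points the hook at the HTTPGet handler pod it created earlier):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main                                    # hypothetical name
    image: docker.io/library/nginx:1.14-alpine    # hypothetical image
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before SIGTERM is delivered;
          # the target URL here is a stand-in for the handler pod.
          command: ["/bin/sh", "-c",
            "wget -q -O- http://handler:8080/echo?msg=prestop"]
EOF
```

Deleting the pod then triggers the hook, which is why the log polls "Waiting for pod pod-with-prestop-exec-hook to disappear" for ~24 seconds before the "check prestop hook" step.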
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:05:26.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 01:05:26.148: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Sep  9 01:05:26.170: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:26.181: INFO: Number of nodes with available pods: 0
Sep  9 01:05:26.181: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 01:05:27.187: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:27.191: INFO: Number of nodes with available pods: 0
Sep  9 01:05:27.191: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 01:05:28.187: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:28.189: INFO: Number of nodes with available pods: 0
Sep  9 01:05:28.189: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 01:05:29.251: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:29.255: INFO: Number of nodes with available pods: 0
Sep  9 01:05:29.255: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 01:05:30.187: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:30.191: INFO: Number of nodes with available pods: 0
Sep  9 01:05:30.191: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 01:05:31.187: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:31.190: INFO: Number of nodes with available pods: 2
Sep  9 01:05:31.190: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Sep  9 01:05:31.274: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:31.274: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:31.296: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:32.300: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:32.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:32.305: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:33.300: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:33.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:33.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:34.300: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:34.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:34.305: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:35.301: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:35.301: INFO: Pod daemon-set-c6x98 is not available
Sep  9 01:05:35.301: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:35.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:36.300: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:36.300: INFO: Pod daemon-set-c6x98 is not available
Sep  9 01:05:36.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:36.305: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:37.300: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:37.300: INFO: Pod daemon-set-c6x98 is not available
Sep  9 01:05:37.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:37.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:38.300: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:38.300: INFO: Pod daemon-set-c6x98 is not available
Sep  9 01:05:38.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:38.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:39.301: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:39.301: INFO: Pod daemon-set-c6x98 is not available
Sep  9 01:05:39.301: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:39.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:40.300: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:40.300: INFO: Pod daemon-set-c6x98 is not available
Sep  9 01:05:40.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:40.303: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:41.300: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:41.300: INFO: Pod daemon-set-c6x98 is not available
Sep  9 01:05:41.301: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:41.305: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:42.300: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:42.300: INFO: Pod daemon-set-c6x98 is not available
Sep  9 01:05:42.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:42.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:43.300: INFO: Wrong image for pod: daemon-set-c6x98. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:43.300: INFO: Pod daemon-set-c6x98 is not available
Sep  9 01:05:43.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:43.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:44.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:44.300: INFO: Pod daemon-set-j9b5q is not available
Sep  9 01:05:44.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:45.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:45.300: INFO: Pod daemon-set-j9b5q is not available
Sep  9 01:05:45.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:46.301: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:46.301: INFO: Pod daemon-set-j9b5q is not available
Sep  9 01:05:46.306: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:47.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:47.305: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:48.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:48.300: INFO: Pod daemon-set-gsvgt is not available
Sep  9 01:05:48.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:49.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:49.300: INFO: Pod daemon-set-gsvgt is not available
Sep  9 01:05:49.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:50.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:50.300: INFO: Pod daemon-set-gsvgt is not available
Sep  9 01:05:50.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:51.300: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:51.300: INFO: Pod daemon-set-gsvgt is not available
Sep  9 01:05:51.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:52.301: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:52.301: INFO: Pod daemon-set-gsvgt is not available
Sep  9 01:05:52.305: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:53.299: INFO: Wrong image for pod: daemon-set-gsvgt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 01:05:53.299: INFO: Pod daemon-set-gsvgt is not available
Sep  9 01:05:53.302: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:54.300: INFO: Pod daemon-set-5kmr8 is not available
Sep  9 01:05:54.305: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Sep  9 01:05:54.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:54.313: INFO: Number of nodes with available pods: 1
Sep  9 01:05:54.313: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 01:05:55.318: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:55.321: INFO: Number of nodes with available pods: 1
Sep  9 01:05:55.321: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 01:05:56.318: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:56.322: INFO: Number of nodes with available pods: 1
Sep  9 01:05:56.322: INFO: Node iruya-worker is running more than one daemon pod
Sep  9 01:05:57.317: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 01:05:57.319: INFO: Number of nodes with available pods: 2
Sep  9 01:05:57.319: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5777, will wait for the garbage collector to delete the pods
Sep  9 01:05:57.392: INFO: Deleting DaemonSet.extensions daemon-set took: 6.889074ms
Sep  9 01:05:57.692: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.246182ms
Sep  9 01:06:03.696: INFO: Number of nodes with available pods: 0
Sep  9 01:06:03.696: INFO: Number of running nodes: 0, number of available pods: 0
Sep  9 01:06:03.698: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5777/daemonsets","resourceVersion":"329707"},"items":null}

Sep  9 01:06:03.701: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5777/pods","resourceVersion":"329707"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:06:03.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5777" for this suite.
Sep  9 01:06:09.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:06:09.901: INFO: namespace daemonsets-5777 deletion completed in 6.186615045s

• [SLOW TEST:43.869 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
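The "Update daemon pods image." step above swaps the DaemonSet's container image and then watches pods roll from `nginx:1.14-alpine` to `redis:1.0` one node at a time. An equivalent manual sequence is sketched below; the container name `app` is a guess, and the suite patches the object via the API rather than these exact commands:

```shell
# Change the image; with updateStrategy RollingUpdate the controller
# replaces pods node by node, which is the churn visible in the log.
kubectl -n daemonsets-5777 set image daemonset/daemon-set \
    app=gcr.io/kubernetes-e2e-test-images/redis:1.0

# Blocks until every node runs an updated, available pod -- the point the
# log reaches at "Number of running nodes: 2, number of available pods: 2".
kubectl -n daemonsets-5777 rollout status daemonset/daemon-set --timeout=5m
```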
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:06:09.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep  9 01:06:10.031: INFO: Waiting up to 5m0s for pod "pod-548fbe4a-187c-426a-9b28-0ce8803f6c84" in namespace "emptydir-7343" to be "success or failure"
Sep  9 01:06:10.035: INFO: Pod "pod-548fbe4a-187c-426a-9b28-0ce8803f6c84": Phase="Pending", Reason="", readiness=false. Elapsed: 3.6147ms
Sep  9 01:06:12.071: INFO: Pod "pod-548fbe4a-187c-426a-9b28-0ce8803f6c84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039661517s
Sep  9 01:06:14.075: INFO: Pod "pod-548fbe4a-187c-426a-9b28-0ce8803f6c84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044047056s
STEP: Saw pod success
Sep  9 01:06:14.076: INFO: Pod "pod-548fbe4a-187c-426a-9b28-0ce8803f6c84" satisfied condition "success or failure"
Sep  9 01:06:14.078: INFO: Trying to get logs from node iruya-worker pod pod-548fbe4a-187c-426a-9b28-0ce8803f6c84 container test-container: 
STEP: delete the pod
Sep  9 01:06:14.102: INFO: Waiting for pod pod-548fbe4a-187c-426a-9b28-0ce8803f6c84 to disappear
Sep  9 01:06:14.108: INFO: Pod pod-548fbe4a-187c-426a-9b28-0ce8803f6c84 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:06:14.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7343" for this suite.
Sep  9 01:06:20.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:06:20.239: INFO: namespace emptydir-7343 deletion completed in 6.125000026s

• [SLOW TEST:10.338 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
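The emptyDir test above mounts a tmpfs-backed volume and checks a 0777 file mode inside it. A hand-rolled pod of the same shape is sketched below; names, image, and command are illustrative (the suite uses its own mounttest image), with `medium: Memory` being the detail that makes the volume tmpfs:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo      # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory            # tmpfs-backed, as in the (root,0777,tmpfs) case
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c",
      "touch /mnt/scratch/f && chmod 0777 /mnt/scratch/f && stat -c '%a' /mnt/scratch/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
EOF

# Once the pod reaches Succeeded, its log should show the mode (777),
# analogous to the suite's "Saw pod success" check.
kubectl logs emptydir-0777-demo
```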
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:06:20.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:06:20.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6028" for this suite.
Sep  9 01:06:42.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:06:42.538: INFO: namespace pods-6028 deletion completed in 22.194567938s

• [SLOW TEST:22.299 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:06:42.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep  9 01:06:42.605: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3a3870c-ddaf-4f09-afc8-0e8b5ac9a878" in namespace "projected-3046" to be "success or failure"
Sep  9 01:06:42.614: INFO: Pod "downwardapi-volume-b3a3870c-ddaf-4f09-afc8-0e8b5ac9a878": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184097ms
Sep  9 01:06:44.796: INFO: Pod "downwardapi-volume-b3a3870c-ddaf-4f09-afc8-0e8b5ac9a878": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190740314s
Sep  9 01:06:46.801: INFO: Pod "downwardapi-volume-b3a3870c-ddaf-4f09-afc8-0e8b5ac9a878": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.195241806s
STEP: Saw pod success
Sep  9 01:06:46.801: INFO: Pod "downwardapi-volume-b3a3870c-ddaf-4f09-afc8-0e8b5ac9a878" satisfied condition "success or failure"
Sep  9 01:06:46.804: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b3a3870c-ddaf-4f09-afc8-0e8b5ac9a878 container client-container: 
STEP: delete the pod
Sep  9 01:06:46.825: INFO: Waiting for pod downwardapi-volume-b3a3870c-ddaf-4f09-afc8-0e8b5ac9a878 to disappear
Sep  9 01:06:46.829: INFO: Pod downwardapi-volume-b3a3870c-ddaf-4f09-afc8-0e8b5ac9a878 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:06:46.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3046" for this suite.
Sep  9 01:06:52.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:06:52.922: INFO: namespace projected-3046 deletion completed in 6.087125595s

• [SLOW TEST:10.383 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:06:52.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:06:52.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8073" for this suite.
Sep  9 01:06:59.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:06:59.093: INFO: namespace services-8073 deletion completed in 6.095764765s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.171 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:06:59.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep  9 01:06:59.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b109fc71-b25e-4942-a3b2-a7055f8629b9" in namespace "projected-9012" to be "success or failure"
Sep  9 01:06:59.157: INFO: Pod "downwardapi-volume-b109fc71-b25e-4942-a3b2-a7055f8629b9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.947267ms
Sep  9 01:07:01.161: INFO: Pod "downwardapi-volume-b109fc71-b25e-4942-a3b2-a7055f8629b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015251196s
Sep  9 01:07:03.166: INFO: Pod "downwardapi-volume-b109fc71-b25e-4942-a3b2-a7055f8629b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019608498s
STEP: Saw pod success
Sep  9 01:07:03.166: INFO: Pod "downwardapi-volume-b109fc71-b25e-4942-a3b2-a7055f8629b9" satisfied condition "success or failure"
Sep  9 01:07:03.169: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b109fc71-b25e-4942-a3b2-a7055f8629b9 container client-container: 
STEP: delete the pod
Sep  9 01:07:03.209: INFO: Waiting for pod downwardapi-volume-b109fc71-b25e-4942-a3b2-a7055f8629b9 to disappear
Sep  9 01:07:03.234: INFO: Pod downwardapi-volume-b109fc71-b25e-4942-a3b2-a7055f8629b9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:07:03.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9012" for this suite.
Sep  9 01:07:09.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:07:09.331: INFO: namespace projected-9012 deletion completed in 6.093405297s

• [SLOW TEST:10.237 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:07:09.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-5f134b34-a84c-42b4-afc9-413363b4a42f
STEP: Creating a pod to test consume configMaps
Sep  9 01:07:09.404: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4a4174b8-9deb-48cc-baf2-28455756e646" in namespace "projected-4433" to be "success or failure"
Sep  9 01:07:09.425: INFO: Pod "pod-projected-configmaps-4a4174b8-9deb-48cc-baf2-28455756e646": Phase="Pending", Reason="", readiness=false. Elapsed: 21.28915ms
Sep  9 01:07:11.430: INFO: Pod "pod-projected-configmaps-4a4174b8-9deb-48cc-baf2-28455756e646": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026445718s
Sep  9 01:07:13.434: INFO: Pod "pod-projected-configmaps-4a4174b8-9deb-48cc-baf2-28455756e646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030459107s
STEP: Saw pod success
Sep  9 01:07:13.434: INFO: Pod "pod-projected-configmaps-4a4174b8-9deb-48cc-baf2-28455756e646" satisfied condition "success or failure"
Sep  9 01:07:13.438: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-4a4174b8-9deb-48cc-baf2-28455756e646 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  9 01:07:13.470: INFO: Waiting for pod pod-projected-configmaps-4a4174b8-9deb-48cc-baf2-28455756e646 to disappear
Sep  9 01:07:13.476: INFO: Pod pod-projected-configmaps-4a4174b8-9deb-48cc-baf2-28455756e646 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:07:13.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4433" for this suite.
Sep  9 01:07:19.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:07:19.600: INFO: namespace projected-4433 deletion completed in 6.119882608s

• [SLOW TEST:10.269 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:07:19.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Sep  9 01:07:19.674: INFO: Waiting up to 5m0s for pod "var-expansion-dc63bbf8-94fd-473d-bc60-5520d64fbb1f" in namespace "var-expansion-3693" to be "success or failure"
Sep  9 01:07:19.710: INFO: Pod "var-expansion-dc63bbf8-94fd-473d-bc60-5520d64fbb1f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.575574ms
Sep  9 01:07:21.714: INFO: Pod "var-expansion-dc63bbf8-94fd-473d-bc60-5520d64fbb1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040348728s
Sep  9 01:07:23.749: INFO: Pod "var-expansion-dc63bbf8-94fd-473d-bc60-5520d64fbb1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075493541s
STEP: Saw pod success
Sep  9 01:07:23.749: INFO: Pod "var-expansion-dc63bbf8-94fd-473d-bc60-5520d64fbb1f" satisfied condition "success or failure"
Sep  9 01:07:23.753: INFO: Trying to get logs from node iruya-worker pod var-expansion-dc63bbf8-94fd-473d-bc60-5520d64fbb1f container dapi-container: 
STEP: delete the pod
Sep  9 01:07:23.810: INFO: Waiting for pod var-expansion-dc63bbf8-94fd-473d-bc60-5520d64fbb1f to disappear
Sep  9 01:07:23.904: INFO: Pod var-expansion-dc63bbf8-94fd-473d-bc60-5520d64fbb1f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:07:23.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3693" for this suite.
Sep  9 01:07:29.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:07:29.998: INFO: namespace var-expansion-3693 deletion completed in 6.089502478s

• [SLOW TEST:10.398 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:07:29.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-8870
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8870 to expose endpoints map[]
Sep  9 01:07:30.203: INFO: Get endpoints failed (44.858765ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Sep  9 01:07:31.210: INFO: successfully validated that service multi-endpoint-test in namespace services-8870 exposes endpoints map[] (1.050983124s elapsed)
STEP: Creating pod pod1 in namespace services-8870
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8870 to expose endpoints map[pod1:[100]]
Sep  9 01:07:34.335: INFO: successfully validated that service multi-endpoint-test in namespace services-8870 exposes endpoints map[pod1:[100]] (3.118874699s elapsed)
STEP: Creating pod pod2 in namespace services-8870
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8870 to expose endpoints map[pod1:[100] pod2:[101]]
Sep  9 01:07:38.456: INFO: successfully validated that service multi-endpoint-test in namespace services-8870 exposes endpoints map[pod1:[100] pod2:[101]] (4.116077307s elapsed)
STEP: Deleting pod pod1 in namespace services-8870
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8870 to expose endpoints map[pod2:[101]]
Sep  9 01:07:39.478: INFO: successfully validated that service multi-endpoint-test in namespace services-8870 exposes endpoints map[pod2:[101]] (1.017801677s elapsed)
STEP: Deleting pod pod2 in namespace services-8870
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8870 to expose endpoints map[]
Sep  9 01:07:40.509: INFO: successfully validated that service multi-endpoint-test in namespace services-8870 exposes endpoints map[] (1.0255861s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:07:40.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8870" for this suite.
Sep  9 01:07:46.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:07:46.668: INFO: namespace services-8870 deletion completed in 6.089826598s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:16.670 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:07:46.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep  9 01:07:46.758: INFO: Waiting up to 5m0s for pod "downward-api-31c7384c-ec94-4fdc-b02a-bfbb1f1695be" in namespace "downward-api-8620" to be "success or failure"
Sep  9 01:07:46.776: INFO: Pod "downward-api-31c7384c-ec94-4fdc-b02a-bfbb1f1695be": Phase="Pending", Reason="", readiness=false. Elapsed: 17.56645ms
Sep  9 01:07:48.780: INFO: Pod "downward-api-31c7384c-ec94-4fdc-b02a-bfbb1f1695be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021734249s
Sep  9 01:07:50.785: INFO: Pod "downward-api-31c7384c-ec94-4fdc-b02a-bfbb1f1695be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02636441s
STEP: Saw pod success
Sep  9 01:07:50.785: INFO: Pod "downward-api-31c7384c-ec94-4fdc-b02a-bfbb1f1695be" satisfied condition "success or failure"
Sep  9 01:07:50.788: INFO: Trying to get logs from node iruya-worker2 pod downward-api-31c7384c-ec94-4fdc-b02a-bfbb1f1695be container dapi-container: 
STEP: delete the pod
Sep  9 01:07:50.822: INFO: Waiting for pod downward-api-31c7384c-ec94-4fdc-b02a-bfbb1f1695be to disappear
Sep  9 01:07:50.828: INFO: Pod downward-api-31c7384c-ec94-4fdc-b02a-bfbb1f1695be no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:07:50.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8620" for this suite.
Sep  9 01:07:56.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:07:56.958: INFO: namespace downward-api-8620 deletion completed in 6.127145287s

• [SLOW TEST:10.290 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:07:56.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Sep  9 01:07:57.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Sep  9 01:07:57.176: INFO: stderr: ""
Sep  9 01:07:57.176: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:41589\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:41589/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:07:57.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6840" for this suite.
Sep  9 01:08:03.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:08:03.311: INFO: namespace kubectl-6840 deletion completed in 6.130945668s

• [SLOW TEST:6.353 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:08:03.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep  9 01:08:03.359: INFO: Waiting up to 5m0s for pod "pod-ffb4d12e-6cba-4758-9616-8811dd0d313d" in namespace "emptydir-6788" to be "success or failure"
Sep  9 01:08:03.386: INFO: Pod "pod-ffb4d12e-6cba-4758-9616-8811dd0d313d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.490812ms
Sep  9 01:08:05.420: INFO: Pod "pod-ffb4d12e-6cba-4758-9616-8811dd0d313d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060581365s
Sep  9 01:08:07.424: INFO: Pod "pod-ffb4d12e-6cba-4758-9616-8811dd0d313d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064578838s
STEP: Saw pod success
Sep  9 01:08:07.424: INFO: Pod "pod-ffb4d12e-6cba-4758-9616-8811dd0d313d" satisfied condition "success or failure"
Sep  9 01:08:07.427: INFO: Trying to get logs from node iruya-worker2 pod pod-ffb4d12e-6cba-4758-9616-8811dd0d313d container test-container: 
STEP: delete the pod
Sep  9 01:08:07.493: INFO: Waiting for pod pod-ffb4d12e-6cba-4758-9616-8811dd0d313d to disappear
Sep  9 01:08:07.587: INFO: Pod pod-ffb4d12e-6cba-4758-9616-8811dd0d313d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:08:07.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6788" for this suite.
Sep  9 01:08:13.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:08:13.705: INFO: namespace emptydir-6788 deletion completed in 6.11264195s

• [SLOW TEST:10.392 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:08:13.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Sep  9 01:08:17.844: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Sep  9 01:08:27.939: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:08:27.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-248" for this suite.
Sep  9 01:08:33.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:08:34.036: INFO: namespace pods-248 deletion completed in 6.09080826s

• [SLOW TEST:20.331 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
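The Delete Grace Period test submits a pod, deletes it gracefully, and then polls until the kubelet has observed the termination notice and the pod is gone. The grace window comes from `terminationGracePeriodSeconds` in the pod spec, or a per-deletion override; a sketch with an illustrative name and value:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grace-demo                    # hypothetical name
spec:
  terminationGracePeriodSeconds: 30   # kubelet sends SIGTERM, waits up to 30s, then SIGKILL
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
```

`kubectl delete pod grace-demo --grace-period=5` would shrink the window for that one deletion, which is essentially what the test exercises before waiting for the pod to disappear.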
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:08:34.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the busybox-main-container
Sep  9 01:08:40.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-8222b69d-9614-4cf0-96e9-c074d4622925 -c busybox-main-container --namespace=emptydir-6183 -- cat /usr/share/volumeshare/shareddata.txt'
Sep  9 01:08:42.878: INFO: stderr: "I0909 01:08:42.801880    3513 log.go:172] (0xc000860420) (0xc000b6a960) Create stream\nI0909 01:08:42.801906    3513 log.go:172] (0xc000860420) (0xc000b6a960) Stream added, broadcasting: 1\nI0909 01:08:42.804197    3513 log.go:172] (0xc000860420) Reply frame received for 1\nI0909 01:08:42.804241    3513 log.go:172] (0xc000860420) (0xc000634140) Create stream\nI0909 01:08:42.804259    3513 log.go:172] (0xc000860420) (0xc000634140) Stream added, broadcasting: 3\nI0909 01:08:42.805281    3513 log.go:172] (0xc000860420) Reply frame received for 3\nI0909 01:08:42.805344    3513 log.go:172] (0xc000860420) (0xc000604000) Create stream\nI0909 01:08:42.805362    3513 log.go:172] (0xc000860420) (0xc000604000) Stream added, broadcasting: 5\nI0909 01:08:42.806301    3513 log.go:172] (0xc000860420) Reply frame received for 5\nI0909 01:08:42.872242    3513 log.go:172] (0xc000860420) Data frame received for 3\nI0909 01:08:42.872287    3513 log.go:172] (0xc000634140) (3) Data frame handling\nI0909 01:08:42.872303    3513 log.go:172] (0xc000634140) (3) Data frame sent\nI0909 01:08:42.872312    3513 log.go:172] (0xc000860420) Data frame received for 3\nI0909 01:08:42.872319    3513 log.go:172] (0xc000634140) (3) Data frame handling\nI0909 01:08:42.872354    3513 log.go:172] (0xc000860420) Data frame received for 5\nI0909 01:08:42.872365    3513 log.go:172] (0xc000604000) (5) Data frame handling\nI0909 01:08:42.873931    3513 log.go:172] (0xc000860420) Data frame received for 1\nI0909 01:08:42.873943    3513 log.go:172] (0xc000b6a960) (1) Data frame handling\nI0909 01:08:42.873950    3513 log.go:172] (0xc000b6a960) (1) Data frame sent\nI0909 01:08:42.873957    3513 log.go:172] (0xc000860420) (0xc000b6a960) Stream removed, broadcasting: 1\nI0909 01:08:42.874518    3513 log.go:172] (0xc000860420) Go away received\nI0909 01:08:42.874980    3513 log.go:172] (0xc000860420) (0xc000b6a960) Stream removed, broadcasting: 1\nI0909 01:08:42.875010    3513 
log.go:172] (0xc000860420) (0xc000634140) Stream removed, broadcasting: 3\nI0909 01:08:42.875026    3513 log.go:172] (0xc000860420) (0xc000604000) Stream removed, broadcasting: 5\n"
Sep  9 01:08:42.878: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:08:42.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6183" for this suite.
Sep  9 01:08:48.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:08:48.967: INFO: namespace emptydir-6183 deletion completed in 6.084543004s

• [SLOW TEST:14.931 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
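The shared-volume test above mounts one emptyDir into two containers of the same pod: a sub-container seeds `/usr/share/volumeshare/shareddata.txt` and the main container reads it back, which is the "Hello from the busy-box sub-container" stdout in the log. A manifest sketch; the write command and sleep durations are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo     # hypothetical name
spec:
  containers:
  - name: busybox-main-container
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox:1.29
    # Writes through its own mount; the file is visible to the main container.
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}                  # one backing directory, two mounts
```

The `kubectl exec ... cat` in the log then reads the file through the main container's mount.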
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:08:48.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep  9 01:08:49.038: INFO: Waiting up to 5m0s for pod "pod-9d0d4487-5a09-44a2-8a26-bac55a5609fe" in namespace "emptydir-7530" to be "success or failure"
Sep  9 01:08:49.041: INFO: Pod "pod-9d0d4487-5a09-44a2-8a26-bac55a5609fe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.05059ms
Sep  9 01:08:51.260: INFO: Pod "pod-9d0d4487-5a09-44a2-8a26-bac55a5609fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222258764s
Sep  9 01:08:53.270: INFO: Pod "pod-9d0d4487-5a09-44a2-8a26-bac55a5609fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.232597134s
STEP: Saw pod success
Sep  9 01:08:53.270: INFO: Pod "pod-9d0d4487-5a09-44a2-8a26-bac55a5609fe" satisfied condition "success or failure"
Sep  9 01:08:53.272: INFO: Trying to get logs from node iruya-worker2 pod pod-9d0d4487-5a09-44a2-8a26-bac55a5609fe container test-container: 
STEP: delete the pod
Sep  9 01:08:53.288: INFO: Waiting for pod pod-9d0d4487-5a09-44a2-8a26-bac55a5609fe to disappear
Sep  9 01:08:53.293: INFO: Pod pod-9d0d4487-5a09-44a2-8a26-bac55a5609fe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:08:53.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7530" for this suite.
Sep  9 01:08:59.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:08:59.384: INFO: namespace emptydir-7530 deletion completed in 6.087971901s

• [SLOW TEST:10.416 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
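The non-root variant differs from the root/0777 case mainly in who exercises the volume: a pod-level `securityContext` drops root before the file is created. The relevant spec fragment (the UID and shell command are illustrative):

```yaml
spec:
  securityContext:
    runAsUser: 1000     # non-root; the test then checks the file is created with 0666
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo data > /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
```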
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:08:59.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Sep  9 01:08:59.481: INFO: Waiting up to 5m0s for pod "client-containers-d0f469b9-7f5d-43dc-8da9-962a39bf970b" in namespace "containers-8386" to be "success or failure"
Sep  9 01:08:59.484: INFO: Pod "client-containers-d0f469b9-7f5d-43dc-8da9-962a39bf970b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.82708ms
Sep  9 01:09:01.488: INFO: Pod "client-containers-d0f469b9-7f5d-43dc-8da9-962a39bf970b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007134867s
Sep  9 01:09:03.492: INFO: Pod "client-containers-d0f469b9-7f5d-43dc-8da9-962a39bf970b": Phase="Running", Reason="", readiness=true. Elapsed: 4.011266237s
Sep  9 01:09:05.497: INFO: Pod "client-containers-d0f469b9-7f5d-43dc-8da9-962a39bf970b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015566468s
STEP: Saw pod success
Sep  9 01:09:05.497: INFO: Pod "client-containers-d0f469b9-7f5d-43dc-8da9-962a39bf970b" satisfied condition "success or failure"
Sep  9 01:09:05.500: INFO: Trying to get logs from node iruya-worker pod client-containers-d0f469b9-7f5d-43dc-8da9-962a39bf970b container test-container: 
STEP: delete the pod
Sep  9 01:09:05.520: INFO: Waiting for pod client-containers-d0f469b9-7f5d-43dc-8da9-962a39bf970b to disappear
Sep  9 01:09:05.525: INFO: Pod client-containers-d0f469b9-7f5d-43dc-8da9-962a39bf970b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:09:05.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8386" for this suite.
Sep  9 01:09:11.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:09:11.666: INFO: namespace containers-8386 deletion completed in 6.138224244s

• [SLOW TEST:12.282 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
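"Override all" in the Docker Containers test means setting both `command` (which replaces the image's ENTRYPOINT) and `args` (which replaces its CMD) in the container spec. A sketch; the name and echoed strings are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]            # replaces the image ENTRYPOINT
    args: ["override", "arguments"]   # replaces the image CMD
```

For contrast: with only `args` set, the image's ENTRYPOINT still runs with the new arguments; with only `command` set, the image's CMD is discarded.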
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:09:11.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-ca98853a-23e6-4093-9007-f714b7c28078
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:09:11.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-259" for this suite.
Sep  9 01:09:17.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:09:17.849: INFO: namespace secrets-259 deletion completed in 6.091519219s

• [SLOW TEST:6.182 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:09:17.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Sep  9 01:09:17.940: INFO: PodSpec: initContainers in spec.initContainers
Sep  9 01:10:03.430: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e4587790-75c5-472e-9c39-05085cbb4757", GenerateName:"", Namespace:"init-container-3941", SelfLink:"/api/v1/namespaces/init-container-3941/pods/pod-init-e4587790-75c5-472e-9c39-05085cbb4757", UID:"e3397380-16cd-48f6-b423-d30c2f9b86cb", ResourceVersion:"330596", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735210557, loc:(*time.Location)(0x7edea20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"940712312"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hg752", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002fdc080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hg752", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hg752", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hg752", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00319a088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc003682000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00319a110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00319a130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00319a138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00319a13c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735210558, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735210558, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735210558, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735210557, loc:(*time.Location)(0x7edea20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.9", PodIP:"10.244.1.127", StartTime:(*v1.Time)(0xc0022c8100), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a510a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a51110)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://28419f123b1811088af61a5749a940ae7fc38a66f231b344ef3c404d33ef2fcc"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022c8180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022c8160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:10:03.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3941" for this suite.
Sep  9 01:10:25.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:10:25.556: INFO: namespace init-container-3941 deletion completed in 22.098096926s

• [SLOW TEST:67.707 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
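The PodSpec dumped above is easier to read as a manifest: two init containers under `restartPolicy: Always`, where `init1` runs `/bin/false` and keeps failing, so `init2` and the app container `run1` never start and the kubelet retries the failing init container indefinitely (the dump shows `RestartCount:3` and phase `Pending`). Reconstructed from the dump, with resources abbreviated:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo             # the log's pod uses a generated name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]       # always fails -> pod stays Pending, restart count climbs
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]        # never reached while init1 fails
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      requests: {cpu: 100m, memory: "52428800"}
      limits:   {cpu: 100m, memory: "52428800"}   # requests == limits -> Guaranteed QoS, as in the dump
```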
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:10:25.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Sep  9 01:10:25.602: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:10:31.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7620" for this suite.
Sep  9 01:10:37.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:10:37.899: INFO: namespace init-container-7620 deletion completed in 6.115758981s

• [SLOW TEST:12.343 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
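The RestartNever variant uses the same failing-init pattern; the only spec difference is `restartPolicy: Never`, under which a failed init container is not retried and the whole pod transitions to `Failed` instead of looping in `Pending`:

```yaml
spec:
  restartPolicy: Never   # init failure is terminal: pod phase becomes Failed
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
```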
SSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:10:37.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-5bcf1263-95b5-45fa-b213-9bb6ef304c44
STEP: Creating secret with name s-test-opt-upd-453c9e8d-8a2f-4291-a4c0-a9c6d43ef073
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5bcf1263-95b5-45fa-b213-9bb6ef304c44
STEP: Updating secret s-test-opt-upd-453c9e8d-8a2f-4291-a4c0-a9c6d43ef073
STEP: Creating secret with name s-test-opt-create-04f3222a-e58e-4847-865b-2a13f515b370
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:10:48.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5705" for this suite.
Sep  9 01:11:12.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:11:12.215: INFO: namespace projected-5705 deletion completed in 24.101597139s

• [SLOW TEST:34.316 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
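The "optional updates" test mounts a projected volume whose secret sources are marked `optional: true`, so deleting a referenced secret empties its entries instead of breaking the mount, and secrets created or updated mid-test appear in the volume after the kubelet's next sync. A sketch of the volume definition; names mirror the log's pattern but are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # hypothetical name
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del       # deleted mid-test; optional, so the pod keeps running
          optional: true
      - secret:
          name: s-test-opt-create    # created mid-test; shows up in the volume once synced
          optional: true
```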
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:11:12.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-0f2262bf-717a-4391-b426-18a104e6451b
STEP: Creating a pod to test consume secrets
Sep  9 01:11:12.299: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d6b10b00-8046-4d36-b182-08dcd23fbb8b" in namespace "projected-8176" to be "success or failure"
Sep  9 01:11:12.307: INFO: Pod "pod-projected-secrets-d6b10b00-8046-4d36-b182-08dcd23fbb8b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176866ms
Sep  9 01:11:14.311: INFO: Pod "pod-projected-secrets-d6b10b00-8046-4d36-b182-08dcd23fbb8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012334512s
Sep  9 01:11:16.315: INFO: Pod "pod-projected-secrets-d6b10b00-8046-4d36-b182-08dcd23fbb8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016606014s
STEP: Saw pod success
Sep  9 01:11:16.315: INFO: Pod "pod-projected-secrets-d6b10b00-8046-4d36-b182-08dcd23fbb8b" satisfied condition "success or failure"
Sep  9 01:11:16.319: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-d6b10b00-8046-4d36-b182-08dcd23fbb8b container projected-secret-volume-test: 
STEP: delete the pod
Sep  9 01:11:16.353: INFO: Waiting for pod pod-projected-secrets-d6b10b00-8046-4d36-b182-08dcd23fbb8b to disappear
Sep  9 01:11:16.366: INFO: Pod pod-projected-secrets-d6b10b00-8046-4d36-b182-08dcd23fbb8b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:11:16.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8176" for this suite.
Sep  9 01:11:22.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:11:22.532: INFO: namespace projected-8176 deletion completed in 6.161739145s

• [SLOW TEST:10.316 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
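This test mounts a secret through a projected volume with an `items` mapping that renames the key and sets an explicit per-file mode. A hedged sketch of the shape of that manifest (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1            # original secret key
            path: new-path-data-1  # file name inside the mount ("mapping")
            mode: 0400             # per-item mode overrides defaultMode
```

The per-item `mode` is why the test carries the `[LinuxOnly]` tag: file modes are only meaningful on Linux nodes.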
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:11:22.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep  9 01:11:22.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a090caa-82b1-45ef-b0c8-41ed9136e7fc" in namespace "projected-1324" to be "success or failure"
Sep  9 01:11:22.642: INFO: Pod "downwardapi-volume-8a090caa-82b1-45ef-b0c8-41ed9136e7fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382581ms
Sep  9 01:11:24.647: INFO: Pod "downwardapi-volume-8a090caa-82b1-45ef-b0c8-41ed9136e7fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010787879s
Sep  9 01:11:26.657: INFO: Pod "downwardapi-volume-8a090caa-82b1-45ef-b0c8-41ed9136e7fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020803752s
STEP: Saw pod success
Sep  9 01:11:26.657: INFO: Pod "downwardapi-volume-8a090caa-82b1-45ef-b0c8-41ed9136e7fc" satisfied condition "success or failure"
Sep  9 01:11:26.659: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8a090caa-82b1-45ef-b0c8-41ed9136e7fc container client-container: 
STEP: delete the pod
Sep  9 01:11:26.690: INFO: Waiting for pod downwardapi-volume-8a090caa-82b1-45ef-b0c8-41ed9136e7fc to disappear
Sep  9 01:11:26.714: INFO: Pod downwardapi-volume-8a090caa-82b1-45ef-b0c8-41ed9136e7fc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:11:26.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1324" for this suite.
Sep  9 01:11:32.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:11:32.828: INFO: namespace projected-1324 deletion completed in 6.111097917s

• [SLOW TEST:10.296 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
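Here the downward API volume asks for `limits.memory` on a container that sets no memory limit, so the kubelet substitutes the node's allocatable memory instead. A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory set: the downward API falls back to
    # the node's allocatable memory as the default limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```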
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:11:32.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-n8dt
STEP: Creating a pod to test atomic-volume-subpath
Sep  9 01:11:32.925: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-n8dt" in namespace "subpath-6535" to be "success or failure"
Sep  9 01:11:32.929: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.396596ms
Sep  9 01:11:34.934: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00894624s
Sep  9 01:11:36.944: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Running", Reason="", readiness=true. Elapsed: 4.01940085s
Sep  9 01:11:38.949: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Running", Reason="", readiness=true. Elapsed: 6.023943706s
Sep  9 01:11:40.952: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Running", Reason="", readiness=true. Elapsed: 8.027776806s
Sep  9 01:11:42.956: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Running", Reason="", readiness=true. Elapsed: 10.031641048s
Sep  9 01:11:44.960: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Running", Reason="", readiness=true. Elapsed: 12.035658979s
Sep  9 01:11:46.963: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Running", Reason="", readiness=true. Elapsed: 14.038825189s
Sep  9 01:11:48.967: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Running", Reason="", readiness=true. Elapsed: 16.042888479s
Sep  9 01:11:50.971: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Running", Reason="", readiness=true. Elapsed: 18.046238209s
Sep  9 01:11:52.975: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Running", Reason="", readiness=true. Elapsed: 20.049957984s
Sep  9 01:11:54.978: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Running", Reason="", readiness=true. Elapsed: 22.053809444s
Sep  9 01:11:56.982: INFO: Pod "pod-subpath-test-downwardapi-n8dt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.057820326s
STEP: Saw pod success
Sep  9 01:11:56.982: INFO: Pod "pod-subpath-test-downwardapi-n8dt" satisfied condition "success or failure"
Sep  9 01:11:56.985: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-n8dt container test-container-subpath-downwardapi-n8dt: 
STEP: delete the pod
Sep  9 01:11:57.003: INFO: Waiting for pod pod-subpath-test-downwardapi-n8dt to disappear
Sep  9 01:11:57.034: INFO: Pod pod-subpath-test-downwardapi-n8dt no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-n8dt
Sep  9 01:11:57.034: INFO: Deleting pod "pod-subpath-test-downwardapi-n8dt" in namespace "subpath-6535"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:11:57.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6535" for this suite.
Sep  9 01:12:03.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:12:03.221: INFO: namespace subpath-6535 deletion completed in 6.180579531s

• [SLOW TEST:30.392 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
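The atomic-writer subpath test mounts a single file out of a downwardAPI volume via `subPath` and polls it while the pod runs (hence the ~24 s in the Running phase above). A sketch of the pattern, assuming illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  containers:
  - name: test-container-subpath-downwardapi
    image: busybox:1.29
    command: ["sh", "-c", "cat /test-volume/podname && sleep 20"]
    volumeMounts:
    - name: downward
      mountPath: /test-volume/podname
      subPath: podname          # mount just this one file from the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The point of the atomic-writer family is that downwardAPI (like ConfigMap and Secret) volumes are updated via atomic symlink swaps, and `subPath` mounts must still behave correctly on top of that mechanism.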
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:12:03.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2374
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Sep  9 01:12:03.320: INFO: Found 0 stateful pods, waiting for 3
Sep  9 01:12:13.353: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep  9 01:12:13.353: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep  9 01:12:13.353: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Sep  9 01:12:23.325: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep  9 01:12:23.325: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep  9 01:12:23.325: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Sep  9 01:12:23.351: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Sep  9 01:12:33.403: INFO: Updating stateful set ss2
Sep  9 01:12:33.430: INFO: Waiting for Pod statefulset-2374/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Sep  9 01:12:43.609: INFO: Found 2 stateful pods, waiting for 3
Sep  9 01:12:53.613: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep  9 01:12:53.613: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep  9 01:12:53.613: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Sep  9 01:12:53.635: INFO: Updating stateful set ss2
Sep  9 01:12:53.713: INFO: Waiting for Pod statefulset-2374/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Sep  9 01:13:03.738: INFO: Updating stateful set ss2
Sep  9 01:13:03.784: INFO: Waiting for StatefulSet statefulset-2374/ss2 to complete update
Sep  9 01:13:03.784: INFO: Waiting for Pod statefulset-2374/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep  9 01:13:13.793: INFO: Deleting all statefulset in ns statefulset-2374
Sep  9 01:13:13.796: INFO: Scaling statefulset ss2 to 0
Sep  9 01:13:33.817: INFO: Waiting for statefulset status.replicas updated to 0
Sep  9 01:13:33.821: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:13:33.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2374" for this suite.
Sep  9 01:13:39.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:13:39.957: INFO: namespace statefulset-2374 deletion completed in 6.095916431s

• [SLOW TEST:96.736 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
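The canary and phased rollout above are driven by `spec.updateStrategy.rollingUpdate.partition`: pods with an ordinal greater than or equal to the partition get the new revision, the rest stay on the old one. A hedged sketch of the StatefulSet this log corresponds to (labels and container name are assumptions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # partition 3 (> replicas): no pod is updated ("Not applying an update...")
      # partition 2: canary -- only ss2-2 gets the new revision
      # lowering toward 0 phases the rollout across ss2-1, then ss2-0
      partition: 2
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/nginx:1.15-alpine  # updated from 1.14-alpine
```

Each partition change produces the "Updating stateful set ss2" lines, and the "Waiting for Pod ... to have revision" lines track which controller revision each ordinal has reached.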
S
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:13:39.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 01:13:40.025: INFO: Creating ReplicaSet my-hostname-basic-32e15618-4487-44ea-850e-9d8b30bb771d
Sep  9 01:13:40.051: INFO: Pod name my-hostname-basic-32e15618-4487-44ea-850e-9d8b30bb771d: Found 0 pods out of 1
Sep  9 01:13:45.055: INFO: Pod name my-hostname-basic-32e15618-4487-44ea-850e-9d8b30bb771d: Found 1 pods out of 1
Sep  9 01:13:45.055: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-32e15618-4487-44ea-850e-9d8b30bb771d" is running
Sep  9 01:13:45.058: INFO: Pod "my-hostname-basic-32e15618-4487-44ea-850e-9d8b30bb771d-9p76q" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-09 01:13:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-09 01:13:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-09 01:13:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-09 01:13:40 +0000 UTC Reason: Message:}])
Sep  9 01:13:45.058: INFO: Trying to dial the pod
Sep  9 01:13:50.068: INFO: Controller my-hostname-basic-32e15618-4487-44ea-850e-9d8b30bb771d: Got expected result from replica 1 [my-hostname-basic-32e15618-4487-44ea-850e-9d8b30bb771d-9p76q]: "my-hostname-basic-32e15618-4487-44ea-850e-9d8b30bb771d-9p76q", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:13:50.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5141" for this suite.
Sep  9 01:13:56.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:13:56.168: INFO: namespace replicaset-5141 deletion completed in 6.097365744s

• [SLOW TEST:16.212 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
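The ReplicaSet test creates one replica of a pod that serves its own hostname and dials it to confirm the response matches the pod name. A minimal sketch; the image path is an assumption based on the serve-hostname pattern this test family uses, not read from this log:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # hypothetical public image that responds with the pod's hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
```

The "Got expected result from replica 1" line is the test confirming the served hostname equals the generated pod name.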
SSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:13:56.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Sep  9 01:14:06.264: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-461 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:14:06.264: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:14:06.306595       6 log.go:172] (0xc00162c9a0) (0xc003e2a820) Create stream
I0909 01:14:06.306628       6 log.go:172] (0xc00162c9a0) (0xc003e2a820) Stream added, broadcasting: 1
I0909 01:14:06.309195       6 log.go:172] (0xc00162c9a0) Reply frame received for 1
I0909 01:14:06.309222       6 log.go:172] (0xc00162c9a0) (0xc003e2a960) Create stream
I0909 01:14:06.309228       6 log.go:172] (0xc00162c9a0) (0xc003e2a960) Stream added, broadcasting: 3
I0909 01:14:06.310275       6 log.go:172] (0xc00162c9a0) Reply frame received for 3
I0909 01:14:06.310306       6 log.go:172] (0xc00162c9a0) (0xc002809e00) Create stream
I0909 01:14:06.310322       6 log.go:172] (0xc00162c9a0) (0xc002809e00) Stream added, broadcasting: 5
I0909 01:14:06.311420       6 log.go:172] (0xc00162c9a0) Reply frame received for 5
I0909 01:14:06.377436       6 log.go:172] (0xc00162c9a0) Data frame received for 5
I0909 01:14:06.377487       6 log.go:172] (0xc002809e00) (5) Data frame handling
I0909 01:14:06.377513       6 log.go:172] (0xc00162c9a0) Data frame received for 3
I0909 01:14:06.377525       6 log.go:172] (0xc003e2a960) (3) Data frame handling
I0909 01:14:06.377539       6 log.go:172] (0xc003e2a960) (3) Data frame sent
I0909 01:14:06.377550       6 log.go:172] (0xc00162c9a0) Data frame received for 3
I0909 01:14:06.377561       6 log.go:172] (0xc003e2a960) (3) Data frame handling
I0909 01:14:06.381829       6 log.go:172] (0xc00162c9a0) Data frame received for 1
I0909 01:14:06.381861       6 log.go:172] (0xc003e2a820) (1) Data frame handling
I0909 01:14:06.381874       6 log.go:172] (0xc003e2a820) (1) Data frame sent
I0909 01:14:06.381886       6 log.go:172] (0xc00162c9a0) (0xc003e2a820) Stream removed, broadcasting: 1
I0909 01:14:06.381903       6 log.go:172] (0xc00162c9a0) Go away received
I0909 01:14:06.382089       6 log.go:172] (0xc00162c9a0) (0xc003e2a820) Stream removed, broadcasting: 1
I0909 01:14:06.382116       6 log.go:172] (0xc00162c9a0) (0xc003e2a960) Stream removed, broadcasting: 3
I0909 01:14:06.382150       6 log.go:172] (0xc00162c9a0) (0xc002809e00) Stream removed, broadcasting: 5
Sep  9 01:14:06.382: INFO: Exec stderr: ""
Sep  9 01:14:06.382: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-461 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:14:06.382: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:14:06.412844       6 log.go:172] (0xc00056dce0) (0xc001750140) Create stream
I0909 01:14:06.412874       6 log.go:172] (0xc00056dce0) (0xc001750140) Stream added, broadcasting: 1
I0909 01:14:06.415409       6 log.go:172] (0xc00056dce0) Reply frame received for 1
I0909 01:14:06.415438       6 log.go:172] (0xc00056dce0) (0xc000674140) Create stream
I0909 01:14:06.415458       6 log.go:172] (0xc00056dce0) (0xc000674140) Stream added, broadcasting: 3
I0909 01:14:06.416614       6 log.go:172] (0xc00056dce0) Reply frame received for 3
I0909 01:14:06.416651       6 log.go:172] (0xc00056dce0) (0xc003add540) Create stream
I0909 01:14:06.416664       6 log.go:172] (0xc00056dce0) (0xc003add540) Stream added, broadcasting: 5
I0909 01:14:06.417518       6 log.go:172] (0xc00056dce0) Reply frame received for 5
I0909 01:14:06.472938       6 log.go:172] (0xc00056dce0) Data frame received for 5
I0909 01:14:06.472980       6 log.go:172] (0xc003add540) (5) Data frame handling
I0909 01:14:06.473009       6 log.go:172] (0xc00056dce0) Data frame received for 3
I0909 01:14:06.473020       6 log.go:172] (0xc000674140) (3) Data frame handling
I0909 01:14:06.473042       6 log.go:172] (0xc000674140) (3) Data frame sent
I0909 01:14:06.473078       6 log.go:172] (0xc00056dce0) Data frame received for 3
I0909 01:14:06.473100       6 log.go:172] (0xc000674140) (3) Data frame handling
I0909 01:14:06.474883       6 log.go:172] (0xc00056dce0) Data frame received for 1
I0909 01:14:06.474930       6 log.go:172] (0xc001750140) (1) Data frame handling
I0909 01:14:06.474951       6 log.go:172] (0xc001750140) (1) Data frame sent
I0909 01:14:06.474973       6 log.go:172] (0xc00056dce0) (0xc001750140) Stream removed, broadcasting: 1
I0909 01:14:06.475000       6 log.go:172] (0xc00056dce0) Go away received
I0909 01:14:06.475147       6 log.go:172] (0xc00056dce0) (0xc001750140) Stream removed, broadcasting: 1
I0909 01:14:06.475184       6 log.go:172] (0xc00056dce0) (0xc000674140) Stream removed, broadcasting: 3
I0909 01:14:06.475213       6 log.go:172] (0xc00056dce0) (0xc003add540) Stream removed, broadcasting: 5
Sep  9 01:14:06.475: INFO: Exec stderr: ""
Sep  9 01:14:06.475: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-461 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:14:06.475: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:14:06.509178       6 log.go:172] (0xc000e6b340) (0xc003add900) Create stream
I0909 01:14:06.509219       6 log.go:172] (0xc000e6b340) (0xc003add900) Stream added, broadcasting: 1
I0909 01:14:06.512147       6 log.go:172] (0xc000e6b340) Reply frame received for 1
I0909 01:14:06.512198       6 log.go:172] (0xc000e6b340) (0xc000674280) Create stream
I0909 01:14:06.512214       6 log.go:172] (0xc000e6b340) (0xc000674280) Stream added, broadcasting: 3
I0909 01:14:06.513297       6 log.go:172] (0xc000e6b340) Reply frame received for 3
I0909 01:14:06.513325       6 log.go:172] (0xc000e6b340) (0xc000674320) Create stream
I0909 01:14:06.513338       6 log.go:172] (0xc000e6b340) (0xc000674320) Stream added, broadcasting: 5
I0909 01:14:06.514341       6 log.go:172] (0xc000e6b340) Reply frame received for 5
I0909 01:14:06.557632       6 log.go:172] (0xc000e6b340) Data frame received for 3
I0909 01:14:06.557681       6 log.go:172] (0xc000674280) (3) Data frame handling
I0909 01:14:06.557713       6 log.go:172] (0xc000674280) (3) Data frame sent
I0909 01:14:06.557753       6 log.go:172] (0xc000e6b340) Data frame received for 5
I0909 01:14:06.557805       6 log.go:172] (0xc000674320) (5) Data frame handling
I0909 01:14:06.557837       6 log.go:172] (0xc000e6b340) Data frame received for 3
I0909 01:14:06.557854       6 log.go:172] (0xc000674280) (3) Data frame handling
I0909 01:14:06.559631       6 log.go:172] (0xc000e6b340) Data frame received for 1
I0909 01:14:06.559696       6 log.go:172] (0xc003add900) (1) Data frame handling
I0909 01:14:06.559740       6 log.go:172] (0xc003add900) (1) Data frame sent
I0909 01:14:06.559758       6 log.go:172] (0xc000e6b340) (0xc003add900) Stream removed, broadcasting: 1
I0909 01:14:06.559848       6 log.go:172] (0xc000e6b340) (0xc003add900) Stream removed, broadcasting: 1
I0909 01:14:06.559865       6 log.go:172] (0xc000e6b340) (0xc000674280) Stream removed, broadcasting: 3
I0909 01:14:06.560169       6 log.go:172] (0xc000e6b340) Go away received
I0909 01:14:06.560248       6 log.go:172] (0xc000e6b340) (0xc000674320) Stream removed, broadcasting: 5
Sep  9 01:14:06.560: INFO: Exec stderr: ""
Sep  9 01:14:06.560: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-461 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:14:06.560: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:14:06.591942       6 log.go:172] (0xc000e6bef0) (0xc003addc20) Create stream
I0909 01:14:06.591967       6 log.go:172] (0xc000e6bef0) (0xc003addc20) Stream added, broadcasting: 1
I0909 01:14:06.594489       6 log.go:172] (0xc000e6bef0) Reply frame received for 1
I0909 01:14:06.594534       6 log.go:172] (0xc000e6bef0) (0xc003e2aa00) Create stream
I0909 01:14:06.594546       6 log.go:172] (0xc000e6bef0) (0xc003e2aa00) Stream added, broadcasting: 3
I0909 01:14:06.595605       6 log.go:172] (0xc000e6bef0) Reply frame received for 3
I0909 01:14:06.595653       6 log.go:172] (0xc000e6bef0) (0xc0017501e0) Create stream
I0909 01:14:06.595669       6 log.go:172] (0xc000e6bef0) (0xc0017501e0) Stream added, broadcasting: 5
I0909 01:14:06.596815       6 log.go:172] (0xc000e6bef0) Reply frame received for 5
I0909 01:14:06.668908       6 log.go:172] (0xc000e6bef0) Data frame received for 5
I0909 01:14:06.668941       6 log.go:172] (0xc0017501e0) (5) Data frame handling
I0909 01:14:06.668995       6 log.go:172] (0xc000e6bef0) Data frame received for 3
I0909 01:14:06.669036       6 log.go:172] (0xc003e2aa00) (3) Data frame handling
I0909 01:14:06.669062       6 log.go:172] (0xc003e2aa00) (3) Data frame sent
I0909 01:14:06.669082       6 log.go:172] (0xc000e6bef0) Data frame received for 3
I0909 01:14:06.669095       6 log.go:172] (0xc003e2aa00) (3) Data frame handling
I0909 01:14:06.670514       6 log.go:172] (0xc000e6bef0) Data frame received for 1
I0909 01:14:06.670554       6 log.go:172] (0xc003addc20) (1) Data frame handling
I0909 01:14:06.670568       6 log.go:172] (0xc003addc20) (1) Data frame sent
I0909 01:14:06.670692       6 log.go:172] (0xc000e6bef0) (0xc003addc20) Stream removed, broadcasting: 1
I0909 01:14:06.670767       6 log.go:172] (0xc000e6bef0) (0xc003addc20) Stream removed, broadcasting: 1
I0909 01:14:06.670782       6 log.go:172] (0xc000e6bef0) (0xc003e2aa00) Stream removed, broadcasting: 3
I0909 01:14:06.670801       6 log.go:172] (0xc000e6bef0) (0xc0017501e0) Stream removed, broadcasting: 5
Sep  9 01:14:06.670: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Sep  9 01:14:06.670: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-461 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:14:06.670: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:14:06.670894       6 log.go:172] (0xc000e6bef0) Go away received
I0909 01:14:06.733878       6 log.go:172] (0xc001bd2b00) (0xc0017505a0) Create stream
I0909 01:14:06.733907       6 log.go:172] (0xc001bd2b00) (0xc0017505a0) Stream added, broadcasting: 1
I0909 01:14:06.736157       6 log.go:172] (0xc001bd2b00) Reply frame received for 1
I0909 01:14:06.736181       6 log.go:172] (0xc001bd2b00) (0xc003addcc0) Create stream
I0909 01:14:06.736191       6 log.go:172] (0xc001bd2b00) (0xc003addcc0) Stream added, broadcasting: 3
I0909 01:14:06.737320       6 log.go:172] (0xc001bd2b00) Reply frame received for 3
I0909 01:14:06.737370       6 log.go:172] (0xc001bd2b00) (0xc003addd60) Create stream
I0909 01:14:06.737387       6 log.go:172] (0xc001bd2b00) (0xc003addd60) Stream added, broadcasting: 5
I0909 01:14:06.738371       6 log.go:172] (0xc001bd2b00) Reply frame received for 5
I0909 01:14:06.806735       6 log.go:172] (0xc001bd2b00) Data frame received for 3
I0909 01:14:06.806787       6 log.go:172] (0xc003addcc0) (3) Data frame handling
I0909 01:14:06.806823       6 log.go:172] (0xc003addcc0) (3) Data frame sent
I0909 01:14:06.806846       6 log.go:172] (0xc001bd2b00) Data frame received for 3
I0909 01:14:06.806860       6 log.go:172] (0xc003addcc0) (3) Data frame handling
I0909 01:14:06.806884       6 log.go:172] (0xc001bd2b00) Data frame received for 5
I0909 01:14:06.806903       6 log.go:172] (0xc003addd60) (5) Data frame handling
I0909 01:14:06.808511       6 log.go:172] (0xc001bd2b00) Data frame received for 1
I0909 01:14:06.808527       6 log.go:172] (0xc0017505a0) (1) Data frame handling
I0909 01:14:06.808538       6 log.go:172] (0xc0017505a0) (1) Data frame sent
I0909 01:14:06.808675       6 log.go:172] (0xc001bd2b00) (0xc0017505a0) Stream removed, broadcasting: 1
I0909 01:14:06.808747       6 log.go:172] (0xc001bd2b00) (0xc0017505a0) Stream removed, broadcasting: 1
I0909 01:14:06.808759       6 log.go:172] (0xc001bd2b00) (0xc003addcc0) Stream removed, broadcasting: 3
I0909 01:14:06.808789       6 log.go:172] (0xc001bd2b00) Go away received
I0909 01:14:06.808829       6 log.go:172] (0xc001bd2b00) (0xc003addd60) Stream removed, broadcasting: 5
Sep  9 01:14:06.808: INFO: Exec stderr: ""
Sep  9 01:14:06.808: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-461 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:14:06.808: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:14:06.840236       6 log.go:172] (0xc000e11a20) (0xc000674a00) Create stream
I0909 01:14:06.840263       6 log.go:172] (0xc000e11a20) (0xc000674a00) Stream added, broadcasting: 1
I0909 01:14:06.845647       6 log.go:172] (0xc000e11a20) Reply frame received for 1
I0909 01:14:06.845708       6 log.go:172] (0xc000e11a20) (0xc0022d25a0) Create stream
I0909 01:14:06.845739       6 log.go:172] (0xc000e11a20) (0xc0022d25a0) Stream added, broadcasting: 3
I0909 01:14:06.846999       6 log.go:172] (0xc000e11a20) Reply frame received for 3
I0909 01:14:06.847025       6 log.go:172] (0xc000e11a20) (0xc0022d26e0) Create stream
I0909 01:14:06.847033       6 log.go:172] (0xc000e11a20) (0xc0022d26e0) Stream added, broadcasting: 5
I0909 01:14:06.847877       6 log.go:172] (0xc000e11a20) Reply frame received for 5
I0909 01:14:06.911910       6 log.go:172] (0xc000e11a20) Data frame received for 3
I0909 01:14:06.911960       6 log.go:172] (0xc0022d25a0) (3) Data frame handling
I0909 01:14:06.911992       6 log.go:172] (0xc0022d25a0) (3) Data frame sent
I0909 01:14:06.912113       6 log.go:172] (0xc000e11a20) Data frame received for 3
I0909 01:14:06.912122       6 log.go:172] (0xc0022d25a0) (3) Data frame handling
I0909 01:14:06.912140       6 log.go:172] (0xc000e11a20) Data frame received for 5
I0909 01:14:06.912146       6 log.go:172] (0xc0022d26e0) (5) Data frame handling
I0909 01:14:06.913233       6 log.go:172] (0xc000e11a20) Data frame received for 1
I0909 01:14:06.913255       6 log.go:172] (0xc000674a00) (1) Data frame handling
I0909 01:14:06.913275       6 log.go:172] (0xc000674a00) (1) Data frame sent
I0909 01:14:06.913382       6 log.go:172] (0xc000e11a20) (0xc000674a00) Stream removed, broadcasting: 1
I0909 01:14:06.913412       6 log.go:172] (0xc000e11a20) Go away received
I0909 01:14:06.913523       6 log.go:172] (0xc000e11a20) (0xc000674a00) Stream removed, broadcasting: 1
I0909 01:14:06.913560       6 log.go:172] (0xc000e11a20) (0xc0022d25a0) Stream removed, broadcasting: 3
I0909 01:14:06.913573       6 log.go:172] (0xc000e11a20) (0xc0022d26e0) Stream removed, broadcasting: 5
Sep  9 01:14:06.913: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Sep  9 01:14:06.913: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-461 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:14:06.913: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:14:06.950137       6 log.go:172] (0xc0020be840) (0xc000675040) Create stream
I0909 01:14:06.950169       6 log.go:172] (0xc0020be840) (0xc000675040) Stream added, broadcasting: 1
I0909 01:14:06.952822       6 log.go:172] (0xc0020be840) Reply frame received for 1
I0909 01:14:06.952880       6 log.go:172] (0xc0020be840) (0xc003adde00) Create stream
I0909 01:14:06.952902       6 log.go:172] (0xc0020be840) (0xc003adde00) Stream added, broadcasting: 3
I0909 01:14:06.953918       6 log.go:172] (0xc0020be840) Reply frame received for 3
I0909 01:14:06.953961       6 log.go:172] (0xc0020be840) (0xc003addea0) Create stream
I0909 01:14:06.953970       6 log.go:172] (0xc0020be840) (0xc003addea0) Stream added, broadcasting: 5
I0909 01:14:06.954843       6 log.go:172] (0xc0020be840) Reply frame received for 5
I0909 01:14:07.012710       6 log.go:172] (0xc0020be840) Data frame received for 5
I0909 01:14:07.012761       6 log.go:172] (0xc003addea0) (5) Data frame handling
I0909 01:14:07.012799       6 log.go:172] (0xc0020be840) Data frame received for 3
I0909 01:14:07.012821       6 log.go:172] (0xc003adde00) (3) Data frame handling
I0909 01:14:07.012850       6 log.go:172] (0xc003adde00) (3) Data frame sent
I0909 01:14:07.012871       6 log.go:172] (0xc0020be840) Data frame received for 3
I0909 01:14:07.012891       6 log.go:172] (0xc003adde00) (3) Data frame handling
I0909 01:14:07.014298       6 log.go:172] (0xc0020be840) Data frame received for 1
I0909 01:14:07.014339       6 log.go:172] (0xc000675040) (1) Data frame handling
I0909 01:14:07.014358       6 log.go:172] (0xc000675040) (1) Data frame sent
I0909 01:14:07.014375       6 log.go:172] (0xc0020be840) (0xc000675040) Stream removed, broadcasting: 1
I0909 01:14:07.014399       6 log.go:172] (0xc0020be840) Go away received
I0909 01:14:07.014503       6 log.go:172] (0xc0020be840) (0xc000675040) Stream removed, broadcasting: 1
I0909 01:14:07.014536       6 log.go:172] (0xc0020be840) (0xc003adde00) Stream removed, broadcasting: 3
I0909 01:14:07.014560       6 log.go:172] (0xc0020be840) (0xc003addea0) Stream removed, broadcasting: 5
Sep  9 01:14:07.014: INFO: Exec stderr: ""
Sep  9 01:14:07.014: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-461 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:14:07.014: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:14:07.048598       6 log.go:172] (0xc00162da20) (0xc003e2ad20) Create stream
I0909 01:14:07.048625       6 log.go:172] (0xc00162da20) (0xc003e2ad20) Stream added, broadcasting: 1
I0909 01:14:07.052943       6 log.go:172] (0xc00162da20) Reply frame received for 1
I0909 01:14:07.053004       6 log.go:172] (0xc00162da20) (0xc003e2adc0) Create stream
I0909 01:14:07.053038       6 log.go:172] (0xc00162da20) (0xc003e2adc0) Stream added, broadcasting: 3
I0909 01:14:07.054799       6 log.go:172] (0xc00162da20) Reply frame received for 3
I0909 01:14:07.054855       6 log.go:172] (0xc00162da20) (0xc003e2ae60) Create stream
I0909 01:14:07.054869       6 log.go:172] (0xc00162da20) (0xc003e2ae60) Stream added, broadcasting: 5
I0909 01:14:07.055732       6 log.go:172] (0xc00162da20) Reply frame received for 5
I0909 01:14:07.104353       6 log.go:172] (0xc00162da20) Data frame received for 5
I0909 01:14:07.104386       6 log.go:172] (0xc003e2ae60) (5) Data frame handling
I0909 01:14:07.104427       6 log.go:172] (0xc00162da20) Data frame received for 3
I0909 01:14:07.104442       6 log.go:172] (0xc003e2adc0) (3) Data frame handling
I0909 01:14:07.104458       6 log.go:172] (0xc003e2adc0) (3) Data frame sent
I0909 01:14:07.104469       6 log.go:172] (0xc00162da20) Data frame received for 3
I0909 01:14:07.104479       6 log.go:172] (0xc003e2adc0) (3) Data frame handling
I0909 01:14:07.106013       6 log.go:172] (0xc00162da20) Data frame received for 1
I0909 01:14:07.106029       6 log.go:172] (0xc003e2ad20) (1) Data frame handling
I0909 01:14:07.106040       6 log.go:172] (0xc003e2ad20) (1) Data frame sent
I0909 01:14:07.106052       6 log.go:172] (0xc00162da20) (0xc003e2ad20) Stream removed, broadcasting: 1
I0909 01:14:07.106064       6 log.go:172] (0xc00162da20) Go away received
I0909 01:14:07.106219       6 log.go:172] (0xc00162da20) (0xc003e2ad20) Stream removed, broadcasting: 1
I0909 01:14:07.106249       6 log.go:172] (0xc00162da20) (0xc003e2adc0) Stream removed, broadcasting: 3
I0909 01:14:07.106262       6 log.go:172] (0xc00162da20) (0xc003e2ae60) Stream removed, broadcasting: 5
Sep  9 01:14:07.106: INFO: Exec stderr: ""
Sep  9 01:14:07.106: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-461 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:14:07.106: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:14:07.130833       6 log.go:172] (0xc000f476b0) (0xc0022d2e60) Create stream
I0909 01:14:07.130854       6 log.go:172] (0xc000f476b0) (0xc0022d2e60) Stream added, broadcasting: 1
I0909 01:14:07.132976       6 log.go:172] (0xc000f476b0) Reply frame received for 1
I0909 01:14:07.133048       6 log.go:172] (0xc000f476b0) (0xc003addf40) Create stream
I0909 01:14:07.133061       6 log.go:172] (0xc000f476b0) (0xc003addf40) Stream added, broadcasting: 3
I0909 01:14:07.134113       6 log.go:172] (0xc000f476b0) Reply frame received for 3
I0909 01:14:07.134155       6 log.go:172] (0xc000f476b0) (0xc001750640) Create stream
I0909 01:14:07.134170       6 log.go:172] (0xc000f476b0) (0xc001750640) Stream added, broadcasting: 5
I0909 01:14:07.135235       6 log.go:172] (0xc000f476b0) Reply frame received for 5
I0909 01:14:07.187981       6 log.go:172] (0xc000f476b0) Data frame received for 3
I0909 01:14:07.188062       6 log.go:172] (0xc003addf40) (3) Data frame handling
I0909 01:14:07.188092       6 log.go:172] (0xc003addf40) (3) Data frame sent
I0909 01:14:07.188097       6 log.go:172] (0xc000f476b0) Data frame received for 3
I0909 01:14:07.188101       6 log.go:172] (0xc003addf40) (3) Data frame handling
I0909 01:14:07.188180       6 log.go:172] (0xc000f476b0) Data frame received for 5
I0909 01:14:07.188199       6 log.go:172] (0xc001750640) (5) Data frame handling
I0909 01:14:07.189946       6 log.go:172] (0xc000f476b0) Data frame received for 1
I0909 01:14:07.189972       6 log.go:172] (0xc0022d2e60) (1) Data frame handling
I0909 01:14:07.189985       6 log.go:172] (0xc0022d2e60) (1) Data frame sent
I0909 01:14:07.189994       6 log.go:172] (0xc000f476b0) (0xc0022d2e60) Stream removed, broadcasting: 1
I0909 01:14:07.190004       6 log.go:172] (0xc000f476b0) Go away received
I0909 01:14:07.190196       6 log.go:172] (0xc000f476b0) (0xc0022d2e60) Stream removed, broadcasting: 1
I0909 01:14:07.190224       6 log.go:172] (0xc000f476b0) (0xc003addf40) Stream removed, broadcasting: 3
I0909 01:14:07.190248       6 log.go:172] (0xc000f476b0) (0xc001750640) Stream removed, broadcasting: 5
Sep  9 01:14:07.190: INFO: Exec stderr: ""
Sep  9 01:14:07.190: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-461 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:14:07.190: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:14:07.227674       6 log.go:172] (0xc001bd3ad0) (0xc001750960) Create stream
I0909 01:14:07.227698       6 log.go:172] (0xc001bd3ad0) (0xc001750960) Stream added, broadcasting: 1
I0909 01:14:07.229941       6 log.go:172] (0xc001bd3ad0) Reply frame received for 1
I0909 01:14:07.229994       6 log.go:172] (0xc001bd3ad0) (0xc0006750e0) Create stream
I0909 01:14:07.230012       6 log.go:172] (0xc001bd3ad0) (0xc0006750e0) Stream added, broadcasting: 3
I0909 01:14:07.230938       6 log.go:172] (0xc001bd3ad0) Reply frame received for 3
I0909 01:14:07.230969       6 log.go:172] (0xc001bd3ad0) (0xc000675180) Create stream
I0909 01:14:07.230985       6 log.go:172] (0xc001bd3ad0) (0xc000675180) Stream added, broadcasting: 5
I0909 01:14:07.232231       6 log.go:172] (0xc001bd3ad0) Reply frame received for 5
I0909 01:14:07.287996       6 log.go:172] (0xc001bd3ad0) Data frame received for 5
I0909 01:14:07.288100       6 log.go:172] (0xc000675180) (5) Data frame handling
I0909 01:14:07.288143       6 log.go:172] (0xc001bd3ad0) Data frame received for 3
I0909 01:14:07.288170       6 log.go:172] (0xc0006750e0) (3) Data frame handling
I0909 01:14:07.288193       6 log.go:172] (0xc0006750e0) (3) Data frame sent
I0909 01:14:07.288230       6 log.go:172] (0xc001bd3ad0) Data frame received for 3
I0909 01:14:07.288249       6 log.go:172] (0xc0006750e0) (3) Data frame handling
I0909 01:14:07.289480       6 log.go:172] (0xc001bd3ad0) Data frame received for 1
I0909 01:14:07.289509       6 log.go:172] (0xc001750960) (1) Data frame handling
I0909 01:14:07.289534       6 log.go:172] (0xc001750960) (1) Data frame sent
I0909 01:14:07.289557       6 log.go:172] (0xc001bd3ad0) (0xc001750960) Stream removed, broadcasting: 1
I0909 01:14:07.289621       6 log.go:172] (0xc001bd3ad0) Go away received
I0909 01:14:07.289645       6 log.go:172] (0xc001bd3ad0) (0xc001750960) Stream removed, broadcasting: 1
I0909 01:14:07.289660       6 log.go:172] (0xc001bd3ad0) (0xc0006750e0) Stream removed, broadcasting: 3
I0909 01:14:07.289671       6 log.go:172] (0xc001bd3ad0) (0xc000675180) Stream removed, broadcasting: 5
Sep  9 01:14:07.289: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:14:07.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-461" for this suite.
Sep  9 01:14:47.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:14:47.386: INFO: namespace e2e-kubelet-etc-hosts-461 deletion completed in 40.093070286s

• [SLOW TEST:51.218 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
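Editor's note: the exec commands in the spec above (`cat /etc/hosts` vs. `cat /etc/hosts-original`) distinguish kubelet-managed hosts files from container-supplied ones. The kubelet marks the files it manages with a fixed header comment, so the test only needs to inspect file contents. A minimal sketch of that check, assuming the header string matches what the kubelet writes (the sample file contents below are illustrative, not taken from this run):

```go
package main

import (
	"fmt"
	"strings"
)

// Header the kubelet prepends to /etc/hosts files it manages
// (assumed to match the kubelet's marker for this release line).
const managedHostsHeader = "# Kubernetes-managed hosts file."

// isKubeletManaged reports whether the given /etc/hosts content was
// generated by the kubelet, i.e. it starts with the managed-file header.
func isKubeletManaged(etcHosts string) bool {
	return strings.HasPrefix(etcHosts, managedHostsHeader)
}

func main() {
	// Illustrative stand-ins for the output captured over the exec streams.
	managed := "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n"
	unmanaged := "127.0.0.1\tlocalhost\n"

	fmt.Println(isKubeletManaged(managed))   // true
	fmt.Println(isKubeletManaged(unmanaged)) // false
}
```

This is why a container that mounts its own volume at /etc/hosts (busybox-3 above) and a hostNetwork pod both fail the managed check: neither file carries the kubelet's header.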
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:14:47.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 01:14:47.442: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Sep  9 01:14:52.447: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Sep  9 01:14:52.447: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep  9 01:14:52.519: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8346,SelfLink:/apis/apps/v1/namespaces/deployment-8346/deployments/test-cleanup-deployment,UID:0ba8fd2b-d02b-4358-ac3a-a381cb6fc6c9,ResourceVersion:331693,Generation:1,CreationTimestamp:2020-09-09 01:14:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Sep  9 01:14:52.528: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8346,SelfLink:/apis/apps/v1/namespaces/deployment-8346/replicasets/test-cleanup-deployment-55bbcbc84c,UID:b1cd82bf-64e3-4b42-824b-a7075846a333,ResourceVersion:331695,Generation:1,CreationTimestamp:2020-09-09 01:14:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 0ba8fd2b-d02b-4358-ac3a-a381cb6fc6c9 0xc003f88e77 0xc003f88e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep  9 01:14:52.528: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Sep  9 01:14:52.528: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-8346,SelfLink:/apis/apps/v1/namespaces/deployment-8346/replicasets/test-cleanup-controller,UID:5bc050e2-eefb-41e1-b79e-ac1b07972c9a,ResourceVersion:331694,Generation:1,CreationTimestamp:2020-09-09 01:14:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 0ba8fd2b-d02b-4358-ac3a-a381cb6fc6c9 0xc003f88da7 0xc003f88da8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Sep  9 01:14:52.542: INFO: Pod "test-cleanup-controller-j2m8v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-j2m8v,GenerateName:test-cleanup-controller-,Namespace:deployment-8346,SelfLink:/api/v1/namespaces/deployment-8346/pods/test-cleanup-controller-j2m8v,UID:8392eead-7b4f-4711-ac39-c7d11b771155,ResourceVersion:331688,Generation:0,CreationTimestamp:2020-09-09 01:14:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 5bc050e2-eefb-41e1-b79e-ac1b07972c9a 0xc003ddd977 0xc003ddd978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76dn8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76dn8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-76dn8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003ddd9f0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc003ddda10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 01:14:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 01:14:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 01:14:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 01:14:47 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.203,StartTime:2020-09-09 01:14:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-09 01:14:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f8a44d76d4b7f2945b5ae92042b87220ee4e15e4e2e675ac62cbe9ee64d80501}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep  9 01:14:52.542: INFO: Pod "test-cleanup-deployment-55bbcbc84c-lhn85" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-lhn85,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8346,SelfLink:/api/v1/namespaces/deployment-8346/pods/test-cleanup-deployment-55bbcbc84c-lhn85,UID:dc4b855e-882d-42f8-b7c5-6f1e2d6de167,ResourceVersion:331699,Generation:0,CreationTimestamp:2020-09-09 01:14:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c b1cd82bf-64e3-4b42-824b-a7075846a333 0xc003dddae7 0xc003dddae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76dn8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76dn8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-76dn8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003dddb70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003dddb90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 01:14:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:14:52.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8346" for this suite.
Sep  9 01:14:58.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:14:58.723: INFO: namespace deployment-8346 deletion completed in 6.117157856s

• [SLOW TEST:11.336 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:14:58.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-2c58487b-b8af-491f-873b-59e4257fd6fb
STEP: Creating a pod to test consume secrets
Sep  9 01:14:58.785: INFO: Waiting up to 5m0s for pod "pod-secrets-b8f1e343-d4ec-42fa-a082-05e7b613e24f" in namespace "secrets-9095" to be "success or failure"
Sep  9 01:14:58.797: INFO: Pod "pod-secrets-b8f1e343-d4ec-42fa-a082-05e7b613e24f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.000288ms
Sep  9 01:15:00.800: INFO: Pod "pod-secrets-b8f1e343-d4ec-42fa-a082-05e7b613e24f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015654647s
Sep  9 01:15:02.804: INFO: Pod "pod-secrets-b8f1e343-d4ec-42fa-a082-05e7b613e24f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019442876s
STEP: Saw pod success
Sep  9 01:15:02.804: INFO: Pod "pod-secrets-b8f1e343-d4ec-42fa-a082-05e7b613e24f" satisfied condition "success or failure"
Sep  9 01:15:02.807: INFO: Trying to get logs from node iruya-worker pod pod-secrets-b8f1e343-d4ec-42fa-a082-05e7b613e24f container secret-volume-test: 
STEP: delete the pod
Sep  9 01:15:02.833: INFO: Waiting for pod pod-secrets-b8f1e343-d4ec-42fa-a082-05e7b613e24f to disappear
Sep  9 01:15:02.850: INFO: Pod pod-secrets-b8f1e343-d4ec-42fa-a082-05e7b613e24f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:15:02.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9095" for this suite.
Sep  9 01:15:08.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:15:08.957: INFO: namespace secrets-9095 deletion completed in 6.088914918s

• [SLOW TEST:10.234 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:15:08.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3478
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep  9 01:15:08.990: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep  9 01:15:35.130: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.136:8080/dial?request=hostName&protocol=udp&host=10.244.1.135&port=8081&tries=1'] Namespace:pod-network-test-3478 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:15:35.130: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:15:35.170402       6 log.go:172] (0xc00056cd10) (0xc00039cfa0) Create stream
I0909 01:15:35.170442       6 log.go:172] (0xc00056cd10) (0xc00039cfa0) Stream added, broadcasting: 1
I0909 01:15:35.172534       6 log.go:172] (0xc00056cd10) Reply frame received for 1
I0909 01:15:35.172566       6 log.go:172] (0xc00056cd10) (0xc003adc000) Create stream
I0909 01:15:35.172579       6 log.go:172] (0xc00056cd10) (0xc003adc000) Stream added, broadcasting: 3
I0909 01:15:35.173722       6 log.go:172] (0xc00056cd10) Reply frame received for 3
I0909 01:15:35.173773       6 log.go:172] (0xc00056cd10) (0xc00039d040) Create stream
I0909 01:15:35.173797       6 log.go:172] (0xc00056cd10) (0xc00039d040) Stream added, broadcasting: 5
I0909 01:15:35.174771       6 log.go:172] (0xc00056cd10) Reply frame received for 5
I0909 01:15:35.269562       6 log.go:172] (0xc00056cd10) Data frame received for 3
I0909 01:15:35.269592       6 log.go:172] (0xc003adc000) (3) Data frame handling
I0909 01:15:35.269608       6 log.go:172] (0xc003adc000) (3) Data frame sent
I0909 01:15:35.269942       6 log.go:172] (0xc00056cd10) Data frame received for 3
I0909 01:15:35.269954       6 log.go:172] (0xc003adc000) (3) Data frame handling
I0909 01:15:35.270057       6 log.go:172] (0xc00056cd10) Data frame received for 5
I0909 01:15:35.270068       6 log.go:172] (0xc00039d040) (5) Data frame handling
I0909 01:15:35.271772       6 log.go:172] (0xc00056cd10) Data frame received for 1
I0909 01:15:35.271784       6 log.go:172] (0xc00039cfa0) (1) Data frame handling
I0909 01:15:35.271790       6 log.go:172] (0xc00039cfa0) (1) Data frame sent
I0909 01:15:35.271798       6 log.go:172] (0xc00056cd10) (0xc00039cfa0) Stream removed, broadcasting: 1
I0909 01:15:35.271807       6 log.go:172] (0xc00056cd10) Go away received
I0909 01:15:35.271966       6 log.go:172] (0xc00056cd10) (0xc00039cfa0) Stream removed, broadcasting: 1
I0909 01:15:35.272102       6 log.go:172] (0xc00056cd10) (0xc003adc000) Stream removed, broadcasting: 3
I0909 01:15:35.272140       6 log.go:172] (0xc00056cd10) (0xc00039d040) Stream removed, broadcasting: 5
Sep  9 01:15:35.272: INFO: Waiting for endpoints: map[]
Sep  9 01:15:35.275: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.136:8080/dial?request=hostName&protocol=udp&host=10.244.2.205&port=8081&tries=1'] Namespace:pod-network-test-3478 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:15:35.275: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:15:35.325220       6 log.go:172] (0xc00056d970) (0xc00039d400) Create stream
I0909 01:15:35.325260       6 log.go:172] (0xc00056d970) (0xc00039d400) Stream added, broadcasting: 1
I0909 01:15:35.327624       6 log.go:172] (0xc00056d970) Reply frame received for 1
I0909 01:15:35.327678       6 log.go:172] (0xc00056d970) (0xc002808000) Create stream
I0909 01:15:35.327693       6 log.go:172] (0xc00056d970) (0xc002808000) Stream added, broadcasting: 3
I0909 01:15:35.329120       6 log.go:172] (0xc00056d970) Reply frame received for 3
I0909 01:15:35.329161       6 log.go:172] (0xc00056d970) (0xc00039d4a0) Create stream
I0909 01:15:35.329174       6 log.go:172] (0xc00056d970) (0xc00039d4a0) Stream added, broadcasting: 5
I0909 01:15:35.330366       6 log.go:172] (0xc00056d970) Reply frame received for 5
I0909 01:15:35.408827       6 log.go:172] (0xc00056d970) Data frame received for 3
I0909 01:15:35.408852       6 log.go:172] (0xc002808000) (3) Data frame handling
I0909 01:15:35.408866       6 log.go:172] (0xc002808000) (3) Data frame sent
I0909 01:15:35.410012       6 log.go:172] (0xc00056d970) Data frame received for 3
I0909 01:15:35.410061       6 log.go:172] (0xc002808000) (3) Data frame handling
I0909 01:15:35.410087       6 log.go:172] (0xc00056d970) Data frame received for 5
I0909 01:15:35.410105       6 log.go:172] (0xc00039d4a0) (5) Data frame handling
I0909 01:15:35.411485       6 log.go:172] (0xc00056d970) Data frame received for 1
I0909 01:15:35.411496       6 log.go:172] (0xc00039d400) (1) Data frame handling
I0909 01:15:35.411501       6 log.go:172] (0xc00039d400) (1) Data frame sent
I0909 01:15:35.411510       6 log.go:172] (0xc00056d970) (0xc00039d400) Stream removed, broadcasting: 1
I0909 01:15:35.411607       6 log.go:172] (0xc00056d970) (0xc00039d400) Stream removed, broadcasting: 1
I0909 01:15:35.411634       6 log.go:172] (0xc00056d970) Go away received
I0909 01:15:35.411661       6 log.go:172] (0xc00056d970) (0xc002808000) Stream removed, broadcasting: 3
I0909 01:15:35.411677       6 log.go:172] (0xc00056d970) (0xc00039d4a0) Stream removed, broadcasting: 5
Sep  9 01:15:35.411: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:15:35.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3478" for this suite.
Sep  9 01:15:57.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:15:57.649: INFO: namespace pod-network-test-3478 deletion completed in 22.094824771s

• [SLOW TEST:48.691 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
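The intra-pod UDP check above works by curling a probe pod's netexec `/dial` endpoint, which relays a `hostName` request to each target pod. As an aside, the structure of that URL (visible in the two `ExecWithOptions` commands) can be sketched as follows; the helper name and parameters are illustrative, not part of the e2e framework:

```python
from urllib.parse import urlencode

def dial_url(probe_ip, probe_port, target_ip, target_port, protocol, tries=1):
    """Build a netexec-style /dial URL like the ones curled from
    host-test-container-pod in the log above (hypothetical helper)."""
    query = urlencode({
        "request": "hostName",   # ask the target to report its hostname
        "protocol": protocol,    # udp here; http in the node-pod variant
        "host": target_ip,
        "port": target_port,
        "tries": tries,
    })
    return f"http://{probe_ip}:{probe_port}/dial?{query}"

# Reconstructs the first probe command from the log:
print(dial_url("10.244.1.136", 8080, "10.244.1.135", 8081, "udp"))
```

The test passes when the dialed endpoints report back and `Waiting for endpoints: map[]` shows nothing left outstanding.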
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:15:57.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep  9 01:15:57.703: INFO: Waiting up to 5m0s for pod "pod-16b51dfb-7d20-4caa-967b-4d4f8362e4d2" in namespace "emptydir-1032" to be "success or failure"
Sep  9 01:15:57.719: INFO: Pod "pod-16b51dfb-7d20-4caa-967b-4d4f8362e4d2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.241641ms
Sep  9 01:15:59.723: INFO: Pod "pod-16b51dfb-7d20-4caa-967b-4d4f8362e4d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019788664s
Sep  9 01:16:01.727: INFO: Pod "pod-16b51dfb-7d20-4caa-967b-4d4f8362e4d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024138358s
STEP: Saw pod success
Sep  9 01:16:01.727: INFO: Pod "pod-16b51dfb-7d20-4caa-967b-4d4f8362e4d2" satisfied condition "success or failure"
Sep  9 01:16:01.731: INFO: Trying to get logs from node iruya-worker2 pod pod-16b51dfb-7d20-4caa-967b-4d4f8362e4d2 container test-container: 
STEP: delete the pod
Sep  9 01:16:01.750: INFO: Waiting for pod pod-16b51dfb-7d20-4caa-967b-4d4f8362e4d2 to disappear
Sep  9 01:16:01.755: INFO: Pod pod-16b51dfb-7d20-4caa-967b-4d4f8362e4d2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:16:01.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1032" for this suite.
Sep  9 01:16:07.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:16:07.864: INFO: namespace emptydir-1032 deletion completed in 6.106243543s

• [SLOW TEST:10.215 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
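The `Waiting up to 5m0s for pod ... to be "success or failure"` lines that recur in these specs are a poll loop: the framework re-fetches the pod phase roughly every two seconds until it reaches a terminal phase or the timeout expires. A minimal sketch of that pattern (not the actual e2e framework code; names are hypothetical):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod is Succeeded or Failed, or raise
    after `timeout` seconds. Mirrors the Pending/Pending/Succeeded
    progression seen in the log; sketch only, not client-go."""
    deadline = now() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if now() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)

# Simulate the three observations from the log (no real sleeping):
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), sleep=lambda s: None))
```

`Succeeded` satisfies the "success or failure" condition, after which the test fetches container logs and deletes the pod.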
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:16:07.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep  9 01:16:07.932: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b3fc9de-730f-4d12-9813-4fc1f072ab4f" in namespace "projected-4300" to be "success or failure"
Sep  9 01:16:07.935: INFO: Pod "downwardapi-volume-5b3fc9de-730f-4d12-9813-4fc1f072ab4f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.168331ms
Sep  9 01:16:09.990: INFO: Pod "downwardapi-volume-5b3fc9de-730f-4d12-9813-4fc1f072ab4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058515293s
Sep  9 01:16:11.994: INFO: Pod "downwardapi-volume-5b3fc9de-730f-4d12-9813-4fc1f072ab4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061863796s
STEP: Saw pod success
Sep  9 01:16:11.994: INFO: Pod "downwardapi-volume-5b3fc9de-730f-4d12-9813-4fc1f072ab4f" satisfied condition "success or failure"
Sep  9 01:16:11.997: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5b3fc9de-730f-4d12-9813-4fc1f072ab4f container client-container: 
STEP: delete the pod
Sep  9 01:16:12.036: INFO: Waiting for pod downwardapi-volume-5b3fc9de-730f-4d12-9813-4fc1f072ab4f to disappear
Sep  9 01:16:12.062: INFO: Pod downwardapi-volume-5b3fc9de-730f-4d12-9813-4fc1f072ab4f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:16:12.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4300" for this suite.
Sep  9 01:16:18.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:16:18.159: INFO: namespace projected-4300 deletion completed in 6.093854341s

• [SLOW TEST:10.295 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:16:18.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Sep  9 01:16:18.263: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5186,SelfLink:/api/v1/namespaces/watch-5186/configmaps/e2e-watch-test-watch-closed,UID:fc8d7136-92b6-49b9-9478-cb4e6519decb,ResourceVersion:332046,Generation:0,CreationTimestamp:2020-09-09 01:16:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep  9 01:16:18.263: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5186,SelfLink:/api/v1/namespaces/watch-5186/configmaps/e2e-watch-test-watch-closed,UID:fc8d7136-92b6-49b9-9478-cb4e6519decb,ResourceVersion:332047,Generation:0,CreationTimestamp:2020-09-09 01:16:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Sep  9 01:16:18.281: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5186,SelfLink:/api/v1/namespaces/watch-5186/configmaps/e2e-watch-test-watch-closed,UID:fc8d7136-92b6-49b9-9478-cb4e6519decb,ResourceVersion:332048,Generation:0,CreationTimestamp:2020-09-09 01:16:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep  9 01:16:18.281: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5186,SelfLink:/api/v1/namespaces/watch-5186/configmaps/e2e-watch-test-watch-closed,UID:fc8d7136-92b6-49b9-9478-cb4e6519decb,ResourceVersion:332049,Generation:0,CreationTimestamp:2020-09-09 01:16:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:16:18.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5186" for this suite.
Sep  9 01:16:24.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:16:24.381: INFO: namespace watch-5186 deletion completed in 6.084864064s

• [SLOW TEST:6.222 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
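The Watchers spec above relies on a core API-machinery guarantee: a new watch started from the last observed `ResourceVersion` replays every change made while the first watch was closed (here, `mutation: 2` then the delete). A simplified sketch of that replay semantics, using the resource versions from the log (illustrative only, not client-go):

```python
# Every event emitted for the configmap, as (resourceVersion, type) pairs
# taken from the log above.
EVENTS = [
    (332046, "ADDED"),
    (332047, "MODIFIED"),   # mutation: 1 — last event the first watch saw
    (332048, "MODIFIED"),   # mutation: 2 — made while the watch was closed
    (332049, "DELETED"),
]

def watch_from(events, resource_version):
    """Yield only events strictly newer than resource_version, mimicking a
    watch restarted at the last RV observed by the previous watch."""
    for rv, kind in events:
        if rv > resource_version:
            yield rv, kind

# The first watch closed after observing 332047; the restarted watch
# delivers exactly the missed MODIFIED and DELETED notifications:
print(list(watch_from(EVENTS, 332047)))
```

This is why the spec sees `MODIFIED` (ResourceVersion 332048) followed by `DELETED` (332049) immediately after restarting the watch.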
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:16:24.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4084
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep  9 01:16:24.417: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep  9 01:16:52.561: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.207:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4084 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:16:52.561: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:16:52.803582       6 log.go:172] (0xc001042b00) (0xc003e2b5e0) Create stream
I0909 01:16:52.803634       6 log.go:172] (0xc001042b00) (0xc003e2b5e0) Stream added, broadcasting: 1
I0909 01:16:52.805796       6 log.go:172] (0xc001042b00) Reply frame received for 1
I0909 01:16:52.805859       6 log.go:172] (0xc001042b00) (0xc0025ac5a0) Create stream
I0909 01:16:52.805876       6 log.go:172] (0xc001042b00) (0xc0025ac5a0) Stream added, broadcasting: 3
I0909 01:16:52.806856       6 log.go:172] (0xc001042b00) Reply frame received for 3
I0909 01:16:52.806885       6 log.go:172] (0xc001042b00) (0xc003649220) Create stream
I0909 01:16:52.806893       6 log.go:172] (0xc001042b00) (0xc003649220) Stream added, broadcasting: 5
I0909 01:16:52.807754       6 log.go:172] (0xc001042b00) Reply frame received for 5
I0909 01:16:52.882863       6 log.go:172] (0xc001042b00) Data frame received for 3
I0909 01:16:52.882887       6 log.go:172] (0xc0025ac5a0) (3) Data frame handling
I0909 01:16:52.882897       6 log.go:172] (0xc0025ac5a0) (3) Data frame sent
I0909 01:16:52.882914       6 log.go:172] (0xc001042b00) Data frame received for 3
I0909 01:16:52.882928       6 log.go:172] (0xc0025ac5a0) (3) Data frame handling
I0909 01:16:52.883061       6 log.go:172] (0xc001042b00) Data frame received for 5
I0909 01:16:52.883092       6 log.go:172] (0xc003649220) (5) Data frame handling
I0909 01:16:52.885043       6 log.go:172] (0xc001042b00) Data frame received for 1
I0909 01:16:52.885057       6 log.go:172] (0xc003e2b5e0) (1) Data frame handling
I0909 01:16:52.885064       6 log.go:172] (0xc003e2b5e0) (1) Data frame sent
I0909 01:16:52.885073       6 log.go:172] (0xc001042b00) (0xc003e2b5e0) Stream removed, broadcasting: 1
I0909 01:16:52.885090       6 log.go:172] (0xc001042b00) Go away received
I0909 01:16:52.885176       6 log.go:172] (0xc001042b00) (0xc003e2b5e0) Stream removed, broadcasting: 1
I0909 01:16:52.885190       6 log.go:172] (0xc001042b00) (0xc0025ac5a0) Stream removed, broadcasting: 3
I0909 01:16:52.885196       6 log.go:172] (0xc001042b00) (0xc003649220) Stream removed, broadcasting: 5
Sep  9 01:16:52.885: INFO: Found all expected endpoints: [netserver-0]
Sep  9 01:16:52.889: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.138:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4084 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep  9 01:16:52.889: INFO: >>> kubeConfig: /root/.kube/config
I0909 01:16:52.923670       6 log.go:172] (0xc0023c2fd0) (0xc0025acbe0) Create stream
I0909 01:16:52.923708       6 log.go:172] (0xc0023c2fd0) (0xc0025acbe0) Stream added, broadcasting: 1
I0909 01:16:52.925759       6 log.go:172] (0xc0023c2fd0) Reply frame received for 1
I0909 01:16:52.925802       6 log.go:172] (0xc0023c2fd0) (0xc00261b4a0) Create stream
I0909 01:16:52.925812       6 log.go:172] (0xc0023c2fd0) (0xc00261b4a0) Stream added, broadcasting: 3
I0909 01:16:52.926633       6 log.go:172] (0xc0023c2fd0) Reply frame received for 3
I0909 01:16:52.926673       6 log.go:172] (0xc0023c2fd0) (0xc003649360) Create stream
I0909 01:16:52.926687       6 log.go:172] (0xc0023c2fd0) (0xc003649360) Stream added, broadcasting: 5
I0909 01:16:52.927591       6 log.go:172] (0xc0023c2fd0) Reply frame received for 5
I0909 01:16:52.987663       6 log.go:172] (0xc0023c2fd0) Data frame received for 5
I0909 01:16:52.987710       6 log.go:172] (0xc003649360) (5) Data frame handling
I0909 01:16:52.987740       6 log.go:172] (0xc0023c2fd0) Data frame received for 3
I0909 01:16:52.987756       6 log.go:172] (0xc00261b4a0) (3) Data frame handling
I0909 01:16:52.987779       6 log.go:172] (0xc00261b4a0) (3) Data frame sent
I0909 01:16:52.987798       6 log.go:172] (0xc0023c2fd0) Data frame received for 3
I0909 01:16:52.987813       6 log.go:172] (0xc00261b4a0) (3) Data frame handling
I0909 01:16:52.989338       6 log.go:172] (0xc0023c2fd0) Data frame received for 1
I0909 01:16:52.989375       6 log.go:172] (0xc0025acbe0) (1) Data frame handling
I0909 01:16:52.989425       6 log.go:172] (0xc0025acbe0) (1) Data frame sent
I0909 01:16:52.989451       6 log.go:172] (0xc0023c2fd0) (0xc0025acbe0) Stream removed, broadcasting: 1
I0909 01:16:52.989491       6 log.go:172] (0xc0023c2fd0) Go away received
I0909 01:16:52.989583       6 log.go:172] (0xc0023c2fd0) (0xc0025acbe0) Stream removed, broadcasting: 1
I0909 01:16:52.989609       6 log.go:172] (0xc0023c2fd0) (0xc00261b4a0) Stream removed, broadcasting: 3
I0909 01:16:52.989630       6 log.go:172] (0xc0023c2fd0) (0xc003649360) Stream removed, broadcasting: 5
Sep  9 01:16:52.989: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:16:52.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4084" for this suite.
Sep  9 01:17:17.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:17:17.094: INFO: namespace pod-network-test-4084 deletion completed in 24.100329511s

• [SLOW TEST:52.713 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:17:17.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep  9 01:17:17.190: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:17:18.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3981" for this suite.
Sep  9 01:17:24.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:17:24.429: INFO: namespace custom-resource-definition-3981 deletion completed in 6.141385403s

• [SLOW TEST:7.334 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:17:24.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Sep  9 01:17:24.494: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep  9 01:17:24.516: INFO: Waiting for terminating namespaces to be deleted...
Sep  9 01:17:24.519: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Sep  9 01:17:24.525: INFO: kindnet-l8ltc from kube-system started at 2020-09-07 19:17:06 +0000 UTC (1 container statuses recorded)
Sep  9 01:17:24.525: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep  9 01:17:24.525: INFO: kube-proxy-7tdlb from kube-system started at 2020-09-07 19:17:06 +0000 UTC (1 container statuses recorded)
Sep  9 01:17:24.525: INFO: 	Container kube-proxy ready: true, restart count 0
Sep  9 01:17:24.525: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Sep  9 01:17:24.529: INFO: kube-proxy-hwdzp from kube-system started at 2020-09-07 19:16:55 +0000 UTC (1 container statuses recorded)
Sep  9 01:17:24.529: INFO: 	Container kube-proxy ready: true, restart count 0
Sep  9 01:17:24.529: INFO: kindnet-mnblj from kube-system started at 2020-09-07 19:16:56 +0000 UTC (1 container statuses recorded)
Sep  9 01:17:24.529: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep  9 01:17:24.529: INFO: coredns-5d4dd4b4db-25mzm from kube-system started at 2020-09-07 19:17:27 +0000 UTC (1 container statuses recorded)
Sep  9 01:17:24.529: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Sep  9 01:17:24.614: INFO: Pod coredns-5d4dd4b4db-25mzm requesting resource cpu=100m on Node iruya-worker2
Sep  9 01:17:24.614: INFO: Pod kindnet-l8ltc requesting resource cpu=100m on Node iruya-worker
Sep  9 01:17:24.614: INFO: Pod kindnet-mnblj requesting resource cpu=100m on Node iruya-worker2
Sep  9 01:17:24.614: INFO: Pod kube-proxy-7tdlb requesting resource cpu=0m on Node iruya-worker
Sep  9 01:17:24.614: INFO: Pod kube-proxy-hwdzp requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6819af4d-8b50-43aa-ae93-199bd1a514f0.1632f8ae2ce3b861], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8755/filler-pod-6819af4d-8b50-43aa-ae93-199bd1a514f0 to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6819af4d-8b50-43aa-ae93-199bd1a514f0.1632f8ae77eabb5f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6819af4d-8b50-43aa-ae93-199bd1a514f0.1632f8aede7cfc80], Reason = [Created], Message = [Created container filler-pod-6819af4d-8b50-43aa-ae93-199bd1a514f0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6819af4d-8b50-43aa-ae93-199bd1a514f0.1632f8aef72cf551], Reason = [Started], Message = [Started container filler-pod-6819af4d-8b50-43aa-ae93-199bd1a514f0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8c3dcd65-1c86-49da-83e7-0b74f074c25b.1632f8ae2ce4094b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8755/filler-pod-8c3dcd65-1c86-49da-83e7-0b74f074c25b to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8c3dcd65-1c86-49da-83e7-0b74f074c25b.1632f8aecf05be91], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8c3dcd65-1c86-49da-83e7-0b74f074c25b.1632f8af1257dd93], Reason = [Created], Message = [Created container filler-pod-8c3dcd65-1c86-49da-83e7-0b74f074c25b]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8c3dcd65-1c86-49da-83e7-0b74f074c25b.1632f8af1ffa1789], Reason = [Started], Message = [Started container filler-pod-8c3dcd65-1c86-49da-83e7-0b74f074c25b]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1632f8af93acfee6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:17:31.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8755" for this suite.
Sep  9 01:17:37.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:17:37.822: INFO: namespace sched-pred-8755 deletion completed in 6.089851796s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:13.391 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
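The predicate exercised above admits a pod only while the sum of CPU requests on a node stays within the node's allocatable CPU; the filler pods consume the remainder, so the final pod triggers `FailedScheduling` with `Insufficient cpu`. A rough sketch of that check (not the scheduler's actual code; the allocatable figure below is a made-up example) is:

```python
# Simplified resource-fit predicate: a pod fits only if its CPU request
# plus the requests already scheduled stays within allocatable CPU.
# All quantities are in millicores (100m == 0.1 CPU).
def fits_cpu(node_allocatable_m, existing_requests_m, pod_request_m):
    return sum(existing_requests_m) + pod_request_m <= node_allocatable_m

# Mirroring the log: a node already carries 100m + 100m + 0m of requests.
# A filler pod then takes everything that remains, so even a 1m pod fails.
allocatable = 16000                      # hypothetical allocatable millicores
existing = [100, 100, 0]                 # coredns, kindnet, kube-proxy
filler = allocatable - sum(existing)     # "consume most of the cluster CPU"
print(fits_cpu(allocatable, existing, filler))        # True
print(fits_cpu(allocatable, existing + [filler], 1))  # False
```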
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:17:37.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Sep  9 01:17:38.228: INFO: Waiting up to 5m0s for pod "pod-71e54835-5474-4ab8-9115-03141eb7a3f2" in namespace "emptydir-2909" to be "success or failure"
Sep  9 01:17:38.273: INFO: Pod "pod-71e54835-5474-4ab8-9115-03141eb7a3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 44.519559ms
Sep  9 01:17:40.276: INFO: Pod "pod-71e54835-5474-4ab8-9115-03141eb7a3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047951662s
Sep  9 01:17:42.280: INFO: Pod "pod-71e54835-5474-4ab8-9115-03141eb7a3f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051808363s
STEP: Saw pod success
Sep  9 01:17:42.280: INFO: Pod "pod-71e54835-5474-4ab8-9115-03141eb7a3f2" satisfied condition "success or failure"
Sep  9 01:17:42.282: INFO: Trying to get logs from node iruya-worker2 pod pod-71e54835-5474-4ab8-9115-03141eb7a3f2 container test-container: 
STEP: delete the pod
Sep  9 01:17:42.299: INFO: Waiting for pod pod-71e54835-5474-4ab8-9115-03141eb7a3f2 to disappear
Sep  9 01:17:42.303: INFO: Pod pod-71e54835-5474-4ab8-9115-03141eb7a3f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:17:42.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2909" for this suite.
Sep  9 01:17:48.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:17:48.502: INFO: namespace emptydir-2909 deletion completed in 6.195723574s

• [SLOW TEST:10.680 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
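The "correct mode" assertion in volume tests like the one above compares a file's permission bits against an `ls -l`-style rwx string read back from inside the pod. As a small sketch of that rendering (using the standard library, not the e2e framework's helper):

```python
import stat

# Render an octal permission value the way `ls -l` displays it for a
# regular file. The test's expected string is built the same way from
# the mode it set on the volume item.
def mode_string(mode_bits):
    return stat.filemode(stat.S_IFREG | mode_bits)

print(mode_string(0o644))  # -rw-r--r--
print(mode_string(0o400))  # -r--------
```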
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:17:48.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-24091eac-c78d-45fc-9975-d87490c882f0
STEP: Creating a pod to test consume secrets
Sep  9 01:17:48.637: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0bc1a07-8716-4da6-968f-6e8494288642" in namespace "projected-9628" to be "success or failure"
Sep  9 01:17:48.649: INFO: Pod "pod-projected-secrets-a0bc1a07-8716-4da6-968f-6e8494288642": Phase="Pending", Reason="", readiness=false. Elapsed: 12.202671ms
Sep  9 01:17:50.653: INFO: Pod "pod-projected-secrets-a0bc1a07-8716-4da6-968f-6e8494288642": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016150174s
Sep  9 01:17:52.657: INFO: Pod "pod-projected-secrets-a0bc1a07-8716-4da6-968f-6e8494288642": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020180292s
STEP: Saw pod success
Sep  9 01:17:52.657: INFO: Pod "pod-projected-secrets-a0bc1a07-8716-4da6-968f-6e8494288642" satisfied condition "success or failure"
Sep  9 01:17:52.661: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-a0bc1a07-8716-4da6-968f-6e8494288642 container projected-secret-volume-test: 
STEP: delete the pod
Sep  9 01:17:52.697: INFO: Waiting for pod pod-projected-secrets-a0bc1a07-8716-4da6-968f-6e8494288642 to disappear
Sep  9 01:17:52.746: INFO: Pod pod-projected-secrets-a0bc1a07-8716-4da6-968f-6e8494288642 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:17:52.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9628" for this suite.
Sep  9 01:17:58.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:17:58.840: INFO: namespace projected-9628 deletion completed in 6.089006568s

• [SLOW TEST:10.337 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:17:58.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  9 01:17:58.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-525'
Sep  9 01:17:59.029: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep  9 01:17:59.029: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Sep  9 01:17:59.050: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-45hzl]
Sep  9 01:17:59.050: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-45hzl" in namespace "kubectl-525" to be "running and ready"
Sep  9 01:17:59.076: INFO: Pod "e2e-test-nginx-rc-45hzl": Phase="Pending", Reason="", readiness=false. Elapsed: 25.608119ms
Sep  9 01:18:01.088: INFO: Pod "e2e-test-nginx-rc-45hzl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037538259s
Sep  9 01:18:03.092: INFO: Pod "e2e-test-nginx-rc-45hzl": Phase="Running", Reason="", readiness=true. Elapsed: 4.041667079s
Sep  9 01:18:03.092: INFO: Pod "e2e-test-nginx-rc-45hzl" satisfied condition "running and ready"
Sep  9 01:18:03.092: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-45hzl]
Sep  9 01:18:03.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-525'
Sep  9 01:18:03.215: INFO: stderr: ""
Sep  9 01:18:03.215: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Sep  9 01:18:03.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-525'
Sep  9 01:18:03.318: INFO: stderr: ""
Sep  9 01:18:03.318: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:18:03.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-525" for this suite.
Sep  9 01:18:25.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:18:25.406: INFO: namespace kubectl-525 deletion completed in 22.085153645s

• [SLOW TEST:26.566 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:18:25.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:18:29.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1412" for this suite.
Sep  9 01:18:35.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:18:35.580: INFO: namespace kubelet-test-1412 deletion completed in 6.099099427s

• [SLOW TEST:10.173 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:18:35.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-3c1ffcae-5cb2-4ef4-9644-f2898501086e
STEP: Creating a pod to test consume configMaps
Sep  9 01:18:35.706: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0a81e93f-7e9c-4c4e-bd3b-8f6ac2c643c5" in namespace "projected-9405" to be "success or failure"
Sep  9 01:18:35.735: INFO: Pod "pod-projected-configmaps-0a81e93f-7e9c-4c4e-bd3b-8f6ac2c643c5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.825406ms
Sep  9 01:18:37.739: INFO: Pod "pod-projected-configmaps-0a81e93f-7e9c-4c4e-bd3b-8f6ac2c643c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032647594s
Sep  9 01:18:39.743: INFO: Pod "pod-projected-configmaps-0a81e93f-7e9c-4c4e-bd3b-8f6ac2c643c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036244017s
STEP: Saw pod success
Sep  9 01:18:39.743: INFO: Pod "pod-projected-configmaps-0a81e93f-7e9c-4c4e-bd3b-8f6ac2c643c5" satisfied condition "success or failure"
Sep  9 01:18:39.746: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-0a81e93f-7e9c-4c4e-bd3b-8f6ac2c643c5 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  9 01:18:39.788: INFO: Waiting for pod pod-projected-configmaps-0a81e93f-7e9c-4c4e-bd3b-8f6ac2c643c5 to disappear
Sep  9 01:18:39.805: INFO: Pod pod-projected-configmaps-0a81e93f-7e9c-4c4e-bd3b-8f6ac2c643c5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:18:39.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9405" for this suite.
Sep  9 01:18:45.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:18:45.907: INFO: namespace projected-9405 deletion completed in 6.098008407s

• [SLOW TEST:10.326 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:18:45.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-9a44a431-ff7b-449c-b34c-1f3734610866
STEP: Creating configMap with name cm-test-opt-upd-b0b6687b-4ac6-429e-81ab-5874fed12982
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9a44a431-ff7b-449c-b34c-1f3734610866
STEP: Updating configmap cm-test-opt-upd-b0b6687b-4ac6-429e-81ab-5874fed12982
STEP: Creating configMap with name cm-test-opt-create-42112d9b-f081-4657-9abc-f50a25a920c7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:20:22.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8117" for this suite.
Sep  9 01:20:46.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:20:46.597: INFO: namespace configmap-8117 deletion completed in 24.08983899s

• [SLOW TEST:120.690 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
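The long "waiting to observe update in volume" step above reflects that the kubelet syncs ConfigMap changes into a mounted volume asynchronously, so the test must poll until the file content changes. A minimal sketch of that poll loop (a generic reader callable stands in for reading the file inside the pod):

```python
import time

# Poll a zero-argument reader until it returns the expected content or a
# deadline passes. The e2e framework does the equivalent against a file
# inside the running pod.
def wait_for_content(read, expected, timeout_s=5.0, interval_s=0.1):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read() == expected:
            return True
        time.sleep(interval_s)
    return False

# Usage: a value that flips after a few reads stands in for the kubelet
# eventually projecting the updated ConfigMap into the volume.
state = {"reads": 0}
def read():
    state["reads"] += 1
    return "updated" if state["reads"] >= 3 else "stale"

print(wait_for_content(read, "updated"))  # True
```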
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:20:46.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:20:50.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9799" for this suite.
Sep  9 01:20:56.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:20:56.899: INFO: namespace emptydir-wrapper-9799 deletion completed in 6.167976341s

• [SLOW TEST:10.301 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:20:56.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep  9 01:20:56.935: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3bab5eb8-d2ae-4de9-907e-6399c32f0182" in namespace "downward-api-7976" to be "success or failure"
Sep  9 01:20:56.961: INFO: Pod "downwardapi-volume-3bab5eb8-d2ae-4de9-907e-6399c32f0182": Phase="Pending", Reason="", readiness=false. Elapsed: 26.10011ms
Sep  9 01:20:58.985: INFO: Pod "downwardapi-volume-3bab5eb8-d2ae-4de9-907e-6399c32f0182": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049974257s
Sep  9 01:21:00.989: INFO: Pod "downwardapi-volume-3bab5eb8-d2ae-4de9-907e-6399c32f0182": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053641807s
STEP: Saw pod success
Sep  9 01:21:00.989: INFO: Pod "downwardapi-volume-3bab5eb8-d2ae-4de9-907e-6399c32f0182" satisfied condition "success or failure"
Sep  9 01:21:00.991: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3bab5eb8-d2ae-4de9-907e-6399c32f0182 container client-container: 
STEP: delete the pod
Sep  9 01:21:01.114: INFO: Waiting for pod downwardapi-volume-3bab5eb8-d2ae-4de9-907e-6399c32f0182 to disappear
Sep  9 01:21:01.270: INFO: Pod downwardapi-volume-3bab5eb8-d2ae-4de9-907e-6399c32f0182 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:21:01.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7976" for this suite.
Sep  9 01:21:07.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:21:07.413: INFO: namespace downward-api-7976 deletion completed in 6.138723926s

• [SLOW TEST:10.514 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:21:07.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-8151f905-29cf-4509-90d2-b57e7602d8e0
STEP: Creating secret with name secret-projected-all-test-volume-853c2a8f-c0ab-40ab-85d2-dea7145f1d27
STEP: Creating a pod to test Check all projections for projected volume plugin
Sep  9 01:21:07.521: INFO: Waiting up to 5m0s for pod "projected-volume-3f3b5957-af46-4f95-9ebd-65c63fbb1d8c" in namespace "projected-4784" to be "success or failure"
Sep  9 01:21:07.536: INFO: Pod "projected-volume-3f3b5957-af46-4f95-9ebd-65c63fbb1d8c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.865712ms
Sep  9 01:21:09.546: INFO: Pod "projected-volume-3f3b5957-af46-4f95-9ebd-65c63fbb1d8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025063335s
Sep  9 01:21:11.550: INFO: Pod "projected-volume-3f3b5957-af46-4f95-9ebd-65c63fbb1d8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02954293s
STEP: Saw pod success
Sep  9 01:21:11.551: INFO: Pod "projected-volume-3f3b5957-af46-4f95-9ebd-65c63fbb1d8c" satisfied condition "success or failure"
Sep  9 01:21:11.553: INFO: Trying to get logs from node iruya-worker pod projected-volume-3f3b5957-af46-4f95-9ebd-65c63fbb1d8c container projected-all-volume-test: 
STEP: delete the pod
Sep  9 01:21:11.573: INFO: Waiting for pod projected-volume-3f3b5957-af46-4f95-9ebd-65c63fbb1d8c to disappear
Sep  9 01:21:11.577: INFO: Pod projected-volume-3f3b5957-af46-4f95-9ebd-65c63fbb1d8c no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:21:11.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4784" for this suite.
Sep  9 01:21:17.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:21:17.687: INFO: namespace projected-4784 deletion completed in 6.106785735s

• [SLOW TEST:10.274 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:21:17.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  9 01:21:17.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3257'
Sep  9 01:21:20.596: INFO: stderr: ""
Sep  9 01:21:20.596: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Sep  9 01:21:25.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3257 -o json'
Sep  9 01:21:25.733: INFO: stderr: ""
Sep  9 01:21:25.733: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-09-09T01:21:20Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-3257\",\n        \"resourceVersion\": \"333042\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3257/pods/e2e-test-nginx-pod\",\n        \"uid\": \"78ab832b-568f-42ca-b458-d9829f3c63ee\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-dzhj5\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-dzhj5\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-dzhj5\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-09T01:21:20Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-09T01:21:23Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-09T01:21:23Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-09T01:21:20Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://7978430fa2fa34e3e733a2acef4b27d6caf0ce4fcc06fcf1d8ff73b03490e73a\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        
\"startedAt\": \"2020-09-09T01:21:23Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.8\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.214\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-09-09T01:21:20Z\"\n    }\n}\n"
STEP: replace the image in the pod
Sep  9 01:21:25.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3257'
Sep  9 01:21:26.034: INFO: stderr: ""
Sep  9 01:21:26.034: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Sep  9 01:21:26.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3257'
Sep  9 01:21:33.640: INFO: stderr: ""
Sep  9 01:21:33.641: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:21:33.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3257" for this suite.
Sep  9 01:21:39.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:21:39.730: INFO: namespace kubectl-3257 deletion completed in 6.086382173s

• [SLOW TEST:22.042 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:21:39.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-1e3b8a5c-4b6f-4cd1-9ac0-9e73ca2064cc
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:21:45.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4800" for this suite.
Sep  9 01:22:07.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:22:07.942: INFO: namespace configmap-4800 deletion completed in 22.110824426s

• [SLOW TEST:28.211 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:22:07.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Sep  9 01:22:12.525: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8737 pod-service-account-5bc61488-6b04-46c0-b1c7-69d4a9be78ec -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Sep  9 01:22:12.709: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8737 pod-service-account-5bc61488-6b04-46c0-b1c7-69d4a9be78ec -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Sep  9 01:22:12.912: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8737 pod-service-account-5bc61488-6b04-46c0-b1c7-69d4a9be78ec -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:22:13.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8737" for this suite.
Sep  9 01:22:19.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:22:19.224: INFO: namespace svcaccounts-8737 deletion completed in 6.088765161s

• [SLOW TEST:11.281 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:22:19.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-c6q5
STEP: Creating a pod to test atomic-volume-subpath
Sep  9 01:22:19.307: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-c6q5" in namespace "subpath-3696" to be "success or failure"
Sep  9 01:22:19.348: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Pending", Reason="", readiness=false. Elapsed: 41.195845ms
Sep  9 01:22:21.352: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044840684s
Sep  9 01:22:23.355: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Running", Reason="", readiness=true. Elapsed: 4.048274131s
Sep  9 01:22:25.359: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Running", Reason="", readiness=true. Elapsed: 6.052103645s
Sep  9 01:22:27.362: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Running", Reason="", readiness=true. Elapsed: 8.055485914s
Sep  9 01:22:29.367: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Running", Reason="", readiness=true. Elapsed: 10.059686009s
Sep  9 01:22:31.371: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Running", Reason="", readiness=true. Elapsed: 12.063749749s
Sep  9 01:22:33.375: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Running", Reason="", readiness=true. Elapsed: 14.068142427s
Sep  9 01:22:35.379: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Running", Reason="", readiness=true. Elapsed: 16.07227956s
Sep  9 01:22:37.384: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Running", Reason="", readiness=true. Elapsed: 18.076707434s
Sep  9 01:22:39.388: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Running", Reason="", readiness=true. Elapsed: 20.080780101s
Sep  9 01:22:41.391: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Running", Reason="", readiness=true. Elapsed: 22.084256842s
Sep  9 01:22:43.415: INFO: Pod "pod-subpath-test-secret-c6q5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.107587644s
STEP: Saw pod success
Sep  9 01:22:43.415: INFO: Pod "pod-subpath-test-secret-c6q5" satisfied condition "success or failure"
Sep  9 01:22:43.417: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-c6q5 container test-container-subpath-secret-c6q5: 
STEP: delete the pod
Sep  9 01:22:43.436: INFO: Waiting for pod pod-subpath-test-secret-c6q5 to disappear
Sep  9 01:22:43.462: INFO: Pod pod-subpath-test-secret-c6q5 no longer exists
STEP: Deleting pod pod-subpath-test-secret-c6q5
Sep  9 01:22:43.462: INFO: Deleting pod "pod-subpath-test-secret-c6q5" in namespace "subpath-3696"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:22:43.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3696" for this suite.
Sep  9 01:22:49.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:22:49.623: INFO: namespace subpath-3696 deletion completed in 6.145001265s

• [SLOW TEST:30.399 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:22:49.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-156f75a2-0855-4139-81ce-53f04abea088
STEP: Creating a pod to test consume secrets
Sep  9 01:22:49.683: INFO: Waiting up to 5m0s for pod "pod-secrets-3fc96951-dc06-4d46-af91-fe49f4bfe28c" in namespace "secrets-3762" to be "success or failure"
Sep  9 01:22:49.720: INFO: Pod "pod-secrets-3fc96951-dc06-4d46-af91-fe49f4bfe28c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.751813ms
Sep  9 01:22:51.724: INFO: Pod "pod-secrets-3fc96951-dc06-4d46-af91-fe49f4bfe28c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040921665s
Sep  9 01:22:53.786: INFO: Pod "pod-secrets-3fc96951-dc06-4d46-af91-fe49f4bfe28c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102748799s
STEP: Saw pod success
Sep  9 01:22:53.786: INFO: Pod "pod-secrets-3fc96951-dc06-4d46-af91-fe49f4bfe28c" satisfied condition "success or failure"
Sep  9 01:22:53.789: INFO: Trying to get logs from node iruya-worker pod pod-secrets-3fc96951-dc06-4d46-af91-fe49f4bfe28c container secret-volume-test: 
STEP: delete the pod
Sep  9 01:22:53.810: INFO: Waiting for pod pod-secrets-3fc96951-dc06-4d46-af91-fe49f4bfe28c to disappear
Sep  9 01:22:53.814: INFO: Pod pod-secrets-3fc96951-dc06-4d46-af91-fe49f4bfe28c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:22:53.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3762" for this suite.
Sep  9 01:22:59.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:22:59.941: INFO: namespace secrets-3762 deletion completed in 6.124253934s

• [SLOW TEST:10.318 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:22:59.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Sep  9 01:23:00.044: INFO: Waiting up to 5m0s for pod "client-containers-63473446-eb4e-4907-bb6b-12a4c5e54e63" in namespace "containers-8367" to be "success or failure"
Sep  9 01:23:00.048: INFO: Pod "client-containers-63473446-eb4e-4907-bb6b-12a4c5e54e63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224175ms
Sep  9 01:23:02.052: INFO: Pod "client-containers-63473446-eb4e-4907-bb6b-12a4c5e54e63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008642869s
Sep  9 01:23:04.056: INFO: Pod "client-containers-63473446-eb4e-4907-bb6b-12a4c5e54e63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012366052s
STEP: Saw pod success
Sep  9 01:23:04.056: INFO: Pod "client-containers-63473446-eb4e-4907-bb6b-12a4c5e54e63" satisfied condition "success or failure"
Sep  9 01:23:04.059: INFO: Trying to get logs from node iruya-worker pod client-containers-63473446-eb4e-4907-bb6b-12a4c5e54e63 container test-container: 
STEP: delete the pod
Sep  9 01:23:04.123: INFO: Waiting for pod client-containers-63473446-eb4e-4907-bb6b-12a4c5e54e63 to disappear
Sep  9 01:23:04.132: INFO: Pod client-containers-63473446-eb4e-4907-bb6b-12a4c5e54e63 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:23:04.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8367" for this suite.
Sep  9 01:23:10.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:23:10.219: INFO: namespace containers-8367 deletion completed in 6.08401442s

• [SLOW TEST:10.277 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep  9 01:23:10.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep  9 01:23:10.320: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20dfd9fc-f79e-4b8b-97db-8e8a4d6378ae" in namespace "downward-api-1581" to be "success or failure"
Sep  9 01:23:10.343: INFO: Pod "downwardapi-volume-20dfd9fc-f79e-4b8b-97db-8e8a4d6378ae": Phase="Pending", Reason="", readiness=false. Elapsed: 22.331792ms
Sep  9 01:23:12.347: INFO: Pod "downwardapi-volume-20dfd9fc-f79e-4b8b-97db-8e8a4d6378ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026761776s
Sep  9 01:23:14.351: INFO: Pod "downwardapi-volume-20dfd9fc-f79e-4b8b-97db-8e8a4d6378ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030493171s
STEP: Saw pod success
Sep  9 01:23:14.351: INFO: Pod "downwardapi-volume-20dfd9fc-f79e-4b8b-97db-8e8a4d6378ae" satisfied condition "success or failure"
Sep  9 01:23:14.353: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-20dfd9fc-f79e-4b8b-97db-8e8a4d6378ae container client-container: 
STEP: delete the pod
Sep  9 01:23:14.371: INFO: Waiting for pod downwardapi-volume-20dfd9fc-f79e-4b8b-97db-8e8a4d6378ae to disappear
Sep  9 01:23:14.418: INFO: Pod downwardapi-volume-20dfd9fc-f79e-4b8b-97db-8e8a4d6378ae no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep  9 01:23:14.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1581" for this suite.
Sep  9 01:23:20.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 01:23:20.573: INFO: namespace downward-api-1581 deletion completed in 6.148850604s

• [SLOW TEST:10.354 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
Sep  9 01:23:20.573: INFO: Running AfterSuite actions on all nodes
Sep  9 01:23:20.573: INFO: Running AfterSuite actions on node 1
Sep  9 01:23:20.573: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 6142.813 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS