I0918 02:21:01.313659 7 e2e.go:243] Starting e2e run "a387e887-f057-4e43-b89b-439f6701652b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1600395648 - Will randomize all specs
Will run 215 of 4413 specs

Sep 18 02:21:02.670: INFO: >>> kubeConfig: /root/.kube/config
Sep 18 02:21:02.721: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 18 02:21:02.923: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 18 02:21:03.069: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 18 02:21:03.069: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 18 02:21:03.069: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 18 02:21:03.109: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 18 02:21:03.109: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 18 02:21:03.109: INFO: e2e test version: v1.15.12
Sep 18 02:21:03.113: INFO: kube-apiserver version: v1.15.11
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:21:03.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
Sep 18 02:21:03.209: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Sep 18 02:21:03.255: INFO: Waiting up to 5m0s for pod "client-containers-42ca020f-484f-4c37-ab68-06d9c0ea64da" in namespace "containers-9758" to be "success or failure"
Sep 18 02:21:03.266: INFO: Pod "client-containers-42ca020f-484f-4c37-ab68-06d9c0ea64da": Phase="Pending", Reason="", readiness=false. Elapsed: 10.583122ms
Sep 18 02:21:05.277: INFO: Pod "client-containers-42ca020f-484f-4c37-ab68-06d9c0ea64da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021083565s
Sep 18 02:21:07.283: INFO: Pod "client-containers-42ca020f-484f-4c37-ab68-06d9c0ea64da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027733927s
Sep 18 02:21:09.293: INFO: Pod "client-containers-42ca020f-484f-4c37-ab68-06d9c0ea64da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037383653s
STEP: Saw pod success
Sep 18 02:21:09.293: INFO: Pod "client-containers-42ca020f-484f-4c37-ab68-06d9c0ea64da" satisfied condition "success or failure"
Sep 18 02:21:09.299: INFO: Trying to get logs from node iruya-worker pod client-containers-42ca020f-484f-4c37-ab68-06d9c0ea64da container test-container:
STEP: delete the pod
Sep 18 02:21:09.343: INFO: Waiting for pod client-containers-42ca020f-484f-4c37-ab68-06d9c0ea64da to disappear
Sep 18 02:21:09.356: INFO: Pod client-containers-42ca020f-484f-4c37-ab68-06d9c0ea64da no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:21:09.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9758" for this suite.
Sep 18 02:21:15.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:21:15.620: INFO: namespace containers-9758 deletion completed in 6.23955104s

• [SLOW TEST:12.503 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:21:15.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-b7026490-2833-4b39-8ed1-44131d6a90c2
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-b7026490-2833-4b39-8ed1-44131d6a90c2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:21:23.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4977" for this suite.
Sep 18 02:21:45.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:21:46.140: INFO: namespace projected-4977 deletion completed in 22.17516157s

• [SLOW TEST:30.514 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:21:46.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep 18 02:21:46.244: INFO: Waiting up to 5m0s for pod "downward-api-d5c5c86e-a58a-4b89-822d-aa45d98e59aa" in namespace "downward-api-5755" to be "success or failure"
Sep 18 02:21:46.268: INFO: Pod "downward-api-d5c5c86e-a58a-4b89-822d-aa45d98e59aa": Phase="Pending", Reason="", readiness=false. Elapsed: 23.385044ms
Sep 18 02:21:48.422: INFO: Pod "downward-api-d5c5c86e-a58a-4b89-822d-aa45d98e59aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177134369s
Sep 18 02:21:50.428: INFO: Pod "downward-api-d5c5c86e-a58a-4b89-822d-aa45d98e59aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183439997s
Sep 18 02:21:52.436: INFO: Pod "downward-api-d5c5c86e-a58a-4b89-822d-aa45d98e59aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.190930802s
STEP: Saw pod success
Sep 18 02:21:52.436: INFO: Pod "downward-api-d5c5c86e-a58a-4b89-822d-aa45d98e59aa" satisfied condition "success or failure"
Sep 18 02:21:52.441: INFO: Trying to get logs from node iruya-worker2 pod downward-api-d5c5c86e-a58a-4b89-822d-aa45d98e59aa container dapi-container:
STEP: delete the pod
Sep 18 02:21:52.495: INFO: Waiting for pod downward-api-d5c5c86e-a58a-4b89-822d-aa45d98e59aa to disappear
Sep 18 02:21:52.507: INFO: Pod downward-api-d5c5c86e-a58a-4b89-822d-aa45d98e59aa no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:21:52.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5755" for this suite.
Sep 18 02:21:58.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:21:58.713: INFO: namespace downward-api-5755 deletion completed in 6.198827748s

• [SLOW TEST:12.572 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:21:58.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 02:22:02.941: INFO: Waiting up to 5m0s for pod "client-envvars-c4989821-361a-492a-9979-2b1e4146ae47" in namespace "pods-3705" to be "success or failure"
Sep 18 02:22:02.970: INFO: Pod "client-envvars-c4989821-361a-492a-9979-2b1e4146ae47": Phase="Pending", Reason="", readiness=false. Elapsed: 28.697108ms
Sep 18 02:22:04.975: INFO: Pod "client-envvars-c4989821-361a-492a-9979-2b1e4146ae47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033634306s
Sep 18 02:22:06.981: INFO: Pod "client-envvars-c4989821-361a-492a-9979-2b1e4146ae47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040229261s
STEP: Saw pod success
Sep 18 02:22:06.982: INFO: Pod "client-envvars-c4989821-361a-492a-9979-2b1e4146ae47" satisfied condition "success or failure"
Sep 18 02:22:06.986: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-c4989821-361a-492a-9979-2b1e4146ae47 container env3cont:
STEP: delete the pod
Sep 18 02:22:07.065: INFO: Waiting for pod client-envvars-c4989821-361a-492a-9979-2b1e4146ae47 to disappear
Sep 18 02:22:07.101: INFO: Pod client-envvars-c4989821-361a-492a-9979-2b1e4146ae47 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:22:07.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3705" for this suite.
Sep 18 02:22:45.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:22:45.269: INFO: namespace pods-3705 deletion completed in 38.160110939s

• [SLOW TEST:46.555 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:22:45.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:22:52.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2302" for this suite.
Sep 18 02:22:58.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:22:58.748: INFO: namespace namespaces-2302 deletion completed in 6.149400248s
STEP: Destroying namespace "nsdeletetest-9998" for this suite.
Sep 18 02:22:58.751: INFO: Namespace nsdeletetest-9998 was already deleted
STEP: Destroying namespace "nsdeletetest-9880" for this suite.
Sep 18 02:23:04.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:23:04.905: INFO: namespace nsdeletetest-9880 deletion completed in 6.153375469s

• [SLOW TEST:19.633 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:23:04.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 18 02:23:09.198: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:23:09.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3689" for this suite.
Sep 18 02:23:15.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:23:15.403: INFO: namespace container-runtime-3689 deletion completed in 6.152663875s

• [SLOW TEST:10.497 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:23:15.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Sep 18 02:23:15.523: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:23:24.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8655" for this suite.
Sep 18 02:23:30.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:23:30.412: INFO: namespace init-container-8655 deletion completed in 6.189848147s

• [SLOW TEST:15.005 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:23:30.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-93143aa6-619f-442a-9fee-5ab648cb8b31
STEP: Creating a pod to test consume configMaps
Sep 18 02:23:30.517: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a246905-b084-4c85-8c4d-eb2d780cced7" in namespace "configmap-3291" to be "success or failure"
Sep 18 02:23:30.531: INFO: Pod "pod-configmaps-0a246905-b084-4c85-8c4d-eb2d780cced7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.559041ms
Sep 18 02:23:32.614: INFO: Pod "pod-configmaps-0a246905-b084-4c85-8c4d-eb2d780cced7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09710466s
Sep 18 02:23:34.697: INFO: Pod "pod-configmaps-0a246905-b084-4c85-8c4d-eb2d780cced7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.180304674s
STEP: Saw pod success
Sep 18 02:23:34.698: INFO: Pod "pod-configmaps-0a246905-b084-4c85-8c4d-eb2d780cced7" satisfied condition "success or failure"
Sep 18 02:23:34.761: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-0a246905-b084-4c85-8c4d-eb2d780cced7 container configmap-volume-test:
STEP: delete the pod
Sep 18 02:23:34.836: INFO: Waiting for pod pod-configmaps-0a246905-b084-4c85-8c4d-eb2d780cced7 to disappear
Sep 18 02:23:34.850: INFO: Pod pod-configmaps-0a246905-b084-4c85-8c4d-eb2d780cced7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:23:34.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3291" for this suite.
Sep 18 02:23:40.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:23:41.043: INFO: namespace configmap-3291 deletion completed in 6.186870497s

• [SLOW TEST:10.628 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:23:41.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9748
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 18 02:23:41.103: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep 18 02:24:11.534: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.229:8080/dial?request=hostName&protocol=udp&host=10.244.1.198&port=8081&tries=1'] Namespace:pod-network-test-9748 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 02:24:11.534: INFO: >>> kubeConfig: /root/.kube/config
I0918 02:24:11.653923 7 log.go:172] (0x8756540) (0x87565b0) Create stream
I0918 02:24:11.654825 7 log.go:172] (0x8756540) (0x87565b0) Stream added, broadcasting: 1
I0918 02:24:11.674245 7 log.go:172] (0x8756540) Reply frame received for 1
I0918 02:24:11.674913 7 log.go:172] (0x8756540) (0x8756620) Create stream
I0918 02:24:11.675005 7 log.go:172] (0x8756540) (0x8756620) Stream added, broadcasting: 3
I0918 02:24:11.677033 7 log.go:172] (0x8756540) Reply frame received for 3
I0918 02:24:11.677300 7 log.go:172] (0x8756540) (0x804a620) Create stream
I0918 02:24:11.677366 7 log.go:172] (0x8756540) (0x804a620) Stream added, broadcasting: 5
I0918 02:24:11.678469 7 log.go:172] (0x8756540) Reply frame received for 5
I0918 02:24:11.739528 7 log.go:172] (0x8756540) Data frame received for 3
I0918 02:24:11.739779 7 log.go:172] (0x8756620) (3) Data frame handling
I0918 02:24:11.740007 7 log.go:172] (0x8756540) Data frame received for 5
I0918 02:24:11.740216 7 log.go:172] (0x804a620) (5) Data frame handling
I0918 02:24:11.740435 7 log.go:172] (0x8756620) (3) Data frame sent
I0918 02:24:11.740941 7 log.go:172] (0x8756540) Data frame received for 3
I0918 02:24:11.741127 7 log.go:172] (0x8756620) (3) Data frame handling
I0918 02:24:11.741507 7 log.go:172] (0x8756540) Data frame received for 1
I0918 02:24:11.741624 7 log.go:172] (0x87565b0) (1) Data frame handling
I0918 02:24:11.741779 7 log.go:172] (0x87565b0) (1) Data frame sent
I0918 02:24:11.742524 7 log.go:172] (0x8756540) (0x87565b0) Stream removed, broadcasting: 1
I0918 02:24:11.745841 7 log.go:172] (0x8756540) Go away received
I0918 02:24:11.746974 7 log.go:172] (0x8756540) (0x87565b0) Stream removed, broadcasting: 1
I0918 02:24:11.747446 7 log.go:172] (0x8756540) (0x8756620) Stream removed, broadcasting: 3
I0918 02:24:11.748019 7 log.go:172] (0x8756540) (0x804a620) Stream removed, broadcasting: 5
Sep 18 02:24:11.749: INFO: Waiting for endpoints: map[]
Sep 18 02:24:11.755: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.229:8080/dial?request=hostName&protocol=udp&host=10.244.2.227&port=8081&tries=1'] Namespace:pod-network-test-9748 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 02:24:11.755: INFO: >>> kubeConfig: /root/.kube/config
I0918 02:24:11.856446 7 log.go:172] (0x7b9a5b0) (0x7b9a690) Create stream
I0918 02:24:11.856657 7 log.go:172] (0x7b9a5b0) (0x7b9a690) Stream added, broadcasting: 1
I0918 02:24:11.860005 7 log.go:172] (0x7b9a5b0) Reply frame received for 1
I0918 02:24:11.860258 7 log.go:172] (0x7b9a5b0) (0x77ee070) Create stream
I0918 02:24:11.860327 7 log.go:172] (0x7b9a5b0) (0x77ee070) Stream added, broadcasting: 3
I0918 02:24:11.861678 7 log.go:172] (0x7b9a5b0) Reply frame received for 3
I0918 02:24:11.861863 7 log.go:172] (0x7b9a5b0) (0x7b9a770) Create stream
I0918 02:24:11.861969 7 log.go:172] (0x7b9a5b0) (0x7b9a770) Stream added, broadcasting: 5
I0918 02:24:11.863117 7 log.go:172] (0x7b9a5b0) Reply frame received for 5
I0918 02:24:11.920976 7 log.go:172] (0x7b9a5b0) Data frame received for 3
I0918 02:24:11.921145 7 log.go:172] (0x77ee070) (3) Data frame handling
I0918 02:24:11.921229 7 log.go:172] (0x77ee070) (3) Data frame sent
I0918 02:24:11.921300 7 log.go:172] (0x7b9a5b0) Data frame received for 3
I0918 02:24:11.921356 7 log.go:172] (0x77ee070) (3) Data frame handling
I0918 02:24:11.921511 7 log.go:172] (0x7b9a5b0) Data frame received for 5
I0918 02:24:11.921701 7 log.go:172] (0x7b9a770) (5) Data frame handling
I0918 02:24:11.921905 7 log.go:172] (0x7b9a5b0) Data frame received for 1
I0918 02:24:11.922003 7 log.go:172] (0x7b9a690) (1) Data frame handling
I0918 02:24:11.922099 7 log.go:172] (0x7b9a690) (1) Data frame sent
I0918 02:24:11.922195 7 log.go:172] (0x7b9a5b0) (0x7b9a690) Stream removed, broadcasting: 1
I0918 02:24:11.922301 7 log.go:172] (0x7b9a5b0) Go away received
I0918 02:24:11.922882 7 log.go:172] (0x7b9a5b0) (0x7b9a690) Stream removed, broadcasting: 1
I0918 02:24:11.923081 7 log.go:172] (0x7b9a5b0) (0x77ee070) Stream removed, broadcasting: 3
I0918 02:24:11.923218 7 log.go:172] (0x7b9a5b0) (0x7b9a770) Stream removed, broadcasting: 5
Sep 18 02:24:11.923: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:24:11.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9748" for this suite.
Sep 18 02:24:36.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 18 02:24:36.197: INFO: namespace pod-network-test-9748 deletion completed in 24.264025768s • [SLOW TEST:55.152 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 18 02:24:36.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] 
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Sep 18 02:24:36.270: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:24:44.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3167" for this suite.
Sep 18 02:25:06.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:25:06.634: INFO: namespace init-container-3167 deletion completed in 22.189811192s

• [SLOW TEST:30.435 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:25:06.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-315, will wait for the garbage collector to delete the pods
Sep 18 02:25:10.843: INFO: Deleting Job.batch foo took: 10.438823ms
Sep 18 02:25:11.145: INFO: Terminating Job.batch foo pods took: 302.686891ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:25:54.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-315" for this suite.
Sep 18 02:26:00.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:26:00.722: INFO: namespace job-315 deletion completed in 6.159162167s

• [SLOW TEST:54.087 seconds]
[sig-apps] Job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:26:00.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-959df8e6-b1cc-4f13-9390-a42b3f6a630e
STEP: Creating a pod to test consume configMaps
Sep 18 02:26:00.810: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-07dc625e-bd95-4c16-b6d1-85af71ae7137" in namespace "projected-1617" to be "success or failure"
Sep 18 02:26:00.825: INFO: Pod "pod-projected-configmaps-07dc625e-bd95-4c16-b6d1-85af71ae7137": Phase="Pending", Reason="", readiness=false. Elapsed: 14.169938ms
Sep 18 02:26:02.832: INFO: Pod "pod-projected-configmaps-07dc625e-bd95-4c16-b6d1-85af71ae7137": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021343649s
Sep 18 02:26:04.840: INFO: Pod "pod-projected-configmaps-07dc625e-bd95-4c16-b6d1-85af71ae7137": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029236723s
STEP: Saw pod success
Sep 18 02:26:04.840: INFO: Pod "pod-projected-configmaps-07dc625e-bd95-4c16-b6d1-85af71ae7137" satisfied condition "success or failure"
Sep 18 02:26:04.846: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-07dc625e-bd95-4c16-b6d1-85af71ae7137 container projected-configmap-volume-test:
STEP: delete the pod
Sep 18 02:26:04.910: INFO: Waiting for pod pod-projected-configmaps-07dc625e-bd95-4c16-b6d1-85af71ae7137 to disappear
Sep 18 02:26:04.914: INFO: Pod pod-projected-configmaps-07dc625e-bd95-4c16-b6d1-85af71ae7137 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:26:04.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1617" for this suite.
Sep 18 02:26:10.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:26:11.083: INFO: namespace projected-1617 deletion completed in 6.161614484s

• [SLOW TEST:10.357 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:26:11.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-41172f77-ae1e-4a2b-b573-96c091ba2da8
STEP: Creating a pod to test consume secrets
Sep 18 02:26:11.237: INFO: Waiting up to 5m0s for pod "pod-secrets-3f2cc6a6-29aa-47f3-b9eb-51c4e1861223" in namespace "secrets-6029" to be "success or failure"
Sep 18 02:26:11.243: INFO: Pod "pod-secrets-3f2cc6a6-29aa-47f3-b9eb-51c4e1861223": Phase="Pending", Reason="", readiness=false. Elapsed: 5.864035ms
Sep 18 02:26:13.261: INFO: Pod "pod-secrets-3f2cc6a6-29aa-47f3-b9eb-51c4e1861223": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024633159s
Sep 18 02:26:15.297: INFO: Pod "pod-secrets-3f2cc6a6-29aa-47f3-b9eb-51c4e1861223": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0597574s
Sep 18 02:26:17.406: INFO: Pod "pod-secrets-3f2cc6a6-29aa-47f3-b9eb-51c4e1861223": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.169335842s
STEP: Saw pod success
Sep 18 02:26:17.406: INFO: Pod "pod-secrets-3f2cc6a6-29aa-47f3-b9eb-51c4e1861223" satisfied condition "success or failure"
Sep 18 02:26:17.440: INFO: Trying to get logs from node iruya-worker pod pod-secrets-3f2cc6a6-29aa-47f3-b9eb-51c4e1861223 container secret-volume-test:
STEP: delete the pod
Sep 18 02:26:17.459: INFO: Waiting for pod pod-secrets-3f2cc6a6-29aa-47f3-b9eb-51c4e1861223 to disappear
Sep 18 02:26:17.463: INFO: Pod pod-secrets-3f2cc6a6-29aa-47f3-b9eb-51c4e1861223 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:26:17.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6029" for this suite.
Sep 18 02:26:25.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:26:25.622: INFO: namespace secrets-6029 deletion completed in 8.149955688s

• [SLOW TEST:14.538 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:26:25.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:26:32.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9469" for this suite.
Sep 18 02:26:39.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:26:39.397: INFO: namespace emptydir-wrapper-9469 deletion completed in 6.550366262s

• [SLOW TEST:13.770 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:26:39.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-022513bc-83fe-4c53-98dc-dbf0ce76df75
STEP: Creating a pod to test consume configMaps
Sep 18 02:26:39.498: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3d74accb-9bb4-4f28-bc03-0dff6bcb5d7b" in namespace "projected-1892" to be "success or failure"
Sep 18 02:26:39.521: INFO: Pod "pod-projected-configmaps-3d74accb-9bb4-4f28-bc03-0dff6bcb5d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.916517ms
Sep 18 02:26:41.529: INFO: Pod "pod-projected-configmaps-3d74accb-9bb4-4f28-bc03-0dff6bcb5d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030642133s
Sep 18 02:26:43.536: INFO: Pod "pod-projected-configmaps-3d74accb-9bb4-4f28-bc03-0dff6bcb5d7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037401624s
STEP: Saw pod success
Sep 18 02:26:43.536: INFO: Pod "pod-projected-configmaps-3d74accb-9bb4-4f28-bc03-0dff6bcb5d7b" satisfied condition "success or failure"
Sep 18 02:26:43.541: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-3d74accb-9bb4-4f28-bc03-0dff6bcb5d7b container projected-configmap-volume-test:
STEP: delete the pod
Sep 18 02:26:43.892: INFO: Waiting for pod pod-projected-configmaps-3d74accb-9bb4-4f28-bc03-0dff6bcb5d7b to disappear
Sep 18 02:26:43.937: INFO: Pod pod-projected-configmaps-3d74accb-9bb4-4f28-bc03-0dff6bcb5d7b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:26:43.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1892" for this suite.
Sep 18 02:26:49.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:26:50.127: INFO: namespace projected-1892 deletion completed in 6.182185973s

• [SLOW TEST:10.729 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:26:50.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Sep 18 02:26:50.231: INFO: Pod name pod-release: Found 0 pods out of 1
Sep 18 02:26:55.239: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:26:55.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5240" for this suite.
Sep 18 02:27:03.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:27:03.518: INFO: namespace replication-controller-5240 deletion completed in 8.186864683s

• [SLOW TEST:13.389 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:27:03.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 02:27:03.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8757bc80-836e-44c0-882c-97091b317857" in namespace "downward-api-8145" to be "success or failure"
Sep 18 02:27:03.679: INFO: Pod "downwardapi-volume-8757bc80-836e-44c0-882c-97091b317857": Phase="Pending", Reason="", readiness=false. Elapsed: 34.294143ms
Sep 18 02:27:05.687: INFO: Pod "downwardapi-volume-8757bc80-836e-44c0-882c-97091b317857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042259899s
Sep 18 02:27:07.695: INFO: Pod "downwardapi-volume-8757bc80-836e-44c0-882c-97091b317857": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050515757s
STEP: Saw pod success
Sep 18 02:27:07.695: INFO: Pod "downwardapi-volume-8757bc80-836e-44c0-882c-97091b317857" satisfied condition "success or failure"
Sep 18 02:27:07.700: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8757bc80-836e-44c0-882c-97091b317857 container client-container:
STEP: delete the pod
Sep 18 02:27:07.722: INFO: Waiting for pod downwardapi-volume-8757bc80-836e-44c0-882c-97091b317857 to disappear
Sep 18 02:27:07.730: INFO: Pod downwardapi-volume-8757bc80-836e-44c0-882c-97091b317857 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:27:07.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8145" for this suite.
Sep 18 02:27:13.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:27:13.937: INFO: namespace downward-api-8145 deletion completed in 6.199780116s

• [SLOW TEST:10.414 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:27:13.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-29cbb0a2-2eec-4e8a-8903-a58cb1554754 in namespace container-probe-3555
Sep 18 02:27:18.051: INFO: Started pod busybox-29cbb0a2-2eec-4e8a-8903-a58cb1554754 in namespace container-probe-3555
STEP: checking the pod's current state and verifying that restartCount is present
Sep 18 02:27:18.056: INFO: Initial restart count of pod busybox-29cbb0a2-2eec-4e8a-8903-a58cb1554754 is 0
Sep 18 02:28:08.290: INFO: Restart count of pod container-probe-3555/busybox-29cbb0a2-2eec-4e8a-8903-a58cb1554754 is now 1 (50.233230561s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:28:08.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3555" for this suite.
Sep 18 02:28:14.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:28:14.508: INFO: namespace container-probe-3555 deletion completed in 6.157608783s

• [SLOW TEST:60.570 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:28:14.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:28:40.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1751" for this suite.
Sep 18 02:28:46.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:28:46.978: INFO: namespace namespaces-1751 deletion completed in 6.208260982s
STEP: Destroying namespace "nsdeletetest-8820" for this suite.
Sep 18 02:28:46.981: INFO: Namespace nsdeletetest-8820 was already deleted
STEP: Destroying namespace "nsdeletetest-5552" for this suite.
Sep 18 02:28:52.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:28:53.129: INFO: namespace nsdeletetest-5552 deletion completed in 6.147902969s

• [SLOW TEST:38.619 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:28:53.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 18 02:28:57.784: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ddfa5ccd-f935-4165-b1b3-9b35ed81480a"
Sep 18 02:28:57.785: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ddfa5ccd-f935-4165-b1b3-9b35ed81480a" in namespace "pods-7205" to be "terminated due to deadline exceeded"
Sep 18 02:28:57.837: INFO: Pod "pod-update-activedeadlineseconds-ddfa5ccd-f935-4165-b1b3-9b35ed81480a": Phase="Running", Reason="", readiness=true. Elapsed: 51.83456ms
Sep 18 02:28:59.844: INFO: Pod "pod-update-activedeadlineseconds-ddfa5ccd-f935-4165-b1b3-9b35ed81480a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.058720115s
Sep 18 02:28:59.844: INFO: Pod "pod-update-activedeadlineseconds-ddfa5ccd-f935-4165-b1b3-9b35ed81480a" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:28:59.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7205" for this suite.
Sep 18 02:29:05.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:29:06.019: INFO: namespace pods-7205 deletion completed in 6.164096037s

• [SLOW TEST:12.884 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:29:06.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-4414f794-d12f-42c3-8975-3266bb97a45b
Sep 18 02:29:06.130: INFO: Pod name my-hostname-basic-4414f794-d12f-42c3-8975-3266bb97a45b: Found 0 pods out of 1
Sep 18 02:29:11.139: INFO: Pod name my-hostname-basic-4414f794-d12f-42c3-8975-3266bb97a45b: Found 1 pods out of 1
Sep 18 02:29:11.140: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4414f794-d12f-42c3-8975-3266bb97a45b" are running
Sep 18 02:29:11.147: INFO: Pod "my-hostname-basic-4414f794-d12f-42c3-8975-3266bb97a45b-kplw2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-18 02:29:06 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-18 02:29:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-18 02:29:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-18 02:29:06 +0000 UTC Reason: Message:}])
Sep 18 02:29:11.149: INFO: Trying to dial the pod
Sep 18 02:29:16.440: INFO: Controller my-hostname-basic-4414f794-d12f-42c3-8975-3266bb97a45b: Got expected result from replica 1 [my-hostname-basic-4414f794-d12f-42c3-8975-3266bb97a45b-kplw2]: "my-hostname-basic-4414f794-d12f-42c3-8975-3266bb97a45b-kplw2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:29:16.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6670" for this suite.
Sep 18 02:29:22.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:29:22.638: INFO: namespace replication-controller-6670 deletion completed in 6.190047812s

• [SLOW TEST:16.618 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:29:22.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 18 02:29:22.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9623'
Sep 18 02:29:30.305: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 18 02:29:30.306: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Sep 18 02:29:30.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9623'
Sep 18 02:29:31.499: INFO: stderr: ""
Sep 18 02:29:31.499: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:29:31.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9623" for this suite.
Sep 18 02:29:37.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:29:37.649: INFO: namespace kubectl-9623 deletion completed in 6.134726808s

• [SLOW TEST:15.009 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:29:37.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Sep 18 02:29:37.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9618'
Sep 18 02:29:39.369: INFO: stderr: ""
Sep 18 02:29:39.369: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Sep 18 02:29:40.379: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 02:29:40.381: INFO: Found 0 / 1
Sep 18 02:29:41.378: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 02:29:41.378: INFO: Found 0 / 1
Sep 18 02:29:42.379: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 02:29:42.379: INFO: Found 0 / 1
Sep 18 02:29:43.378: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 02:29:43.379: INFO: Found 1 / 1
Sep 18 02:29:43.379: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Sep 18 02:29:43.385: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 02:29:43.385: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Sep 18 02:29:43.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bcr9x redis-master --namespace=kubectl-9618'
Sep 18 02:29:44.566: INFO: stderr: ""
Sep 18 02:29:44.566: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Sep 02:29:41.974 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Sep 02:29:41.974 # Server started, Redis version 3.2.12\n1:M 18 Sep 02:29:41.974 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Sep 02:29:41.974 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Sep 18 02:29:44.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bcr9x redis-master --namespace=kubectl-9618 --tail=1'
Sep 18 02:29:45.722: INFO: stderr: ""
Sep 18 02:29:45.722: INFO: stdout: "1:M 18 Sep 02:29:41.974 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Sep 18 02:29:45.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bcr9x redis-master --namespace=kubectl-9618 --limit-bytes=1'
Sep 18 02:29:46.844: INFO: stderr: ""
Sep 18 02:29:46.844: INFO: stdout: " "
STEP: exposing timestamps
Sep 18 02:29:46.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bcr9x redis-master --namespace=kubectl-9618 --tail=1 --timestamps'
Sep 18 02:29:47.993: INFO: stderr: ""
Sep 18 02:29:47.993: INFO: stdout: "2020-09-18T02:29:41.974535532Z 1:M 18 Sep 02:29:41.974 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Sep 18 02:29:50.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bcr9x redis-master --namespace=kubectl-9618 --since=1s'
Sep 18 02:29:51.652: INFO: stderr: ""
Sep 18 02:29:51.652: INFO: stdout: ""
Sep 18 02:29:51.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bcr9x redis-master --namespace=kubectl-9618 --since=24h'
Sep 18 02:29:52.807: INFO: stderr: ""
Sep 18 02:29:52.808: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Sep 02:29:41.974 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Sep 02:29:41.974 # Server started, Redis version 3.2.12\n1:M 18 Sep 02:29:41.974 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Sep 02:29:41.974 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Sep 18 02:29:52.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9618'
Sep 18 02:29:53.892: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 18 02:29:53.892: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Sep 18 02:29:53.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9618'
Sep 18 02:29:55.060: INFO: stderr: "No resources found.\n"
Sep 18 02:29:55.060: INFO: stdout: ""
Sep 18 02:29:55.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9618 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 18 02:29:56.198: INFO: stderr: ""
Sep 18 02:29:56.198: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:29:56.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9618" for this suite.
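The `kubectl create -f - --namespace=...` invocations in this test pipe a ReplicationController manifest over stdin, so the manifest itself never appears in the log. Below is a minimal sketch of what such a manifest could look like; the `app: redis` selector, the `redis-master` container name, and port 6379 are taken from the log above, while the image reference is an assumption (the log only reports "Redis 3.2.12"):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master          # matches "replicationcontroller/redis-master created"
  labels:
    app: redis
spec:
  replicas: 1                 # the log waits for 1 / 1 pods
  selector:
    app: redis                # matches "Selector matched 1 pods for map[app:redis]"
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: docker.io/library/redis:3.2   # assumed image; the log only shows the Redis 3.2.12 banner
        ports:
        - containerPort: 6379
```

Piped over stdin, this reproduces the logged command shape: `kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9618 < rc.yaml`.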
Sep 18 02:30:02.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:30:02.813: INFO: namespace kubectl-9618 deletion completed in 6.553593903s

• [SLOW TEST:25.162 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:30:02.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Sep 18 02:30:03.020: INFO: namespace kubectl-6381
Sep 18 02:30:03.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6381'
Sep 18 02:30:04.498: INFO: stderr: ""
Sep 18 02:30:04.498: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep 18 02:30:05.507: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 02:30:05.507: INFO: Found 0 / 1
Sep 18 02:30:06.521: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 02:30:06.521: INFO: Found 0 / 1
Sep 18 02:30:07.507: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 02:30:07.508: INFO: Found 0 / 1
Sep 18 02:30:08.507: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 02:30:08.507: INFO: Found 1 / 1
Sep 18 02:30:08.507: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Sep 18 02:30:08.512: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 02:30:08.512: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Sep 18 02:30:08.513: INFO: wait on redis-master startup in kubectl-6381
Sep 18 02:30:08.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-495sb redis-master --namespace=kubectl-6381'
Sep 18 02:30:09.673: INFO: stderr: ""
Sep 18 02:30:09.673: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Sep 02:30:06.996 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Sep 02:30:06.996 # Server started, Redis version 3.2.12\n1:M 18 Sep 02:30:06.996 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Sep 02:30:06.996 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Sep 18 02:30:09.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6381'
Sep 18 02:30:10.956: INFO: stderr: ""
Sep 18 02:30:10.956: INFO: stdout: "service/rm2 exposed\n"
Sep 18 02:30:10.960: INFO: Service rm2 in namespace kubectl-6381 found.
STEP: exposing service
Sep 18 02:30:12.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6381'
Sep 18 02:30:14.197: INFO: stderr: ""
Sep 18 02:30:14.197: INFO: stdout: "service/rm3 exposed\n"
Sep 18 02:30:14.214: INFO: Service rm3 in namespace kubectl-6381 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:30:16.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6381" for this suite.
Sep 18 02:30:40.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:30:41.404: INFO: namespace kubectl-6381 deletion completed in 25.16810465s

• [SLOW TEST:38.590 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:30:41.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Sep 18 02:30:48.498: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-c0957746-f49f-4207-8acb-7d2f70f9f3f8,GenerateName:,Namespace:events-7574,SelfLink:/api/v1/namespaces/events-7574/pods/send-events-c0957746-f49f-4207-8acb-7d2f70f9f3f8,UID:ebdb44be-c98e-42f9-b20d-940008cf9f8b,ResourceVersion:783829,Generation:0,CreationTimestamp:2020-09-18 02:30:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 736023804,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4qnc5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4qnc5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-4qnc5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x831a330} {node.kubernetes.io/unreachable Exists NoExecute 0x831a350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:30:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:30:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:30:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:30:41 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.212,StartTime:2020-09-18 02:30:42 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-09-18 02:30:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://5ce252e67a97bb98a3d0a404f2d84eed972bafad395b99d6f8fe9caf63ebe24a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Sep 18 02:30:50.524: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Sep 18 02:30:52.530: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:30:52.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7574" for this suite.
Sep 18 02:31:36.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:31:36.793: INFO: namespace events-7574 deletion completed in 44.215400492s

• [SLOW TEST:55.388 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:31:36.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 02:31:36.914: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Sep 18 02:31:43.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Sep 18 02:31:44.607: INFO: stderr: ""
Sep 18 02:31:44.607: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:31:44.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7617" for this suite.
Sep 18 02:31:50.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:31:50.793: INFO: namespace kubectl-7617 deletion completed in 6.17283046s

• [SLOW TEST:7.432 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:31:50.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Sep 18 02:31:50.858: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Sep 18 02:32:01.725: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Sep 18 02:32:04.176: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735993121, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735993121, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735993121, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735993121, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 18 02:32:06.840: INFO: Waited 631.366824ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:32:07.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4692" for this suite.
Sep 18 02:32:13.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:32:13.731: INFO: namespace aggregator-4692 deletion completed in 6.441549333s

• [SLOW TEST:22.937 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:32:13.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4970
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Sep 18 02:32:13.860: INFO: Found 0 stateful pods, waiting for 3
Sep 18 02:32:23.870: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 02:32:23.870: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 02:32:23.871: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 02:32:23.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4970 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 18 02:32:25.358: INFO: stderr: "I0918 02:32:25.163263     418 log.go:172] (0x24ae9a0) (0x24af260) Create stream\nI0918 02:32:25.166358     418 log.go:172] (0x24ae9a0) (0x24af260) Stream added, broadcasting: 1\nI0918 02:32:25.178770     418 log.go:172] (0x24ae9a0) Reply frame received for 1\nI0918 02:32:25.179871     418 log.go:172] (0x24ae9a0) (0x2ace000) Create stream\nI0918 02:32:25.179988     418 log.go:172] (0x24ae9a0) (0x2ace000) Stream added, broadcasting: 3\nI0918 02:32:25.181863     418 log.go:172] (0x24ae9a0) Reply frame received for 3\nI0918 02:32:25.182213     418 log.go:172] (0x24ae9a0) (0x2956000) Create stream\nI0918 02:32:25.182301     418 log.go:172] (0x24ae9a0) (0x2956000) Stream added, broadcasting: 5\nI0918 02:32:25.183531     418 log.go:172] (0x24ae9a0) Reply frame received for 5\nI0918 02:32:25.252624     418 log.go:172] (0x24ae9a0) Data frame received for 5\nI0918 02:32:25.253119     418 log.go:172] (0x2956000) (5) Data frame handling\nI0918 02:32:25.254153     418 log.go:172] (0x2956000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0918 02:32:25.339325     418 log.go:172] (0x24ae9a0) Data frame received for 3\nI0918 02:32:25.339568     418 log.go:172] (0x2ace000) (3) Data frame handling\nI0918 02:32:25.339842     418 log.go:172] (0x24ae9a0) Data frame received for 5\nI0918 02:32:25.340055     418 log.go:172] (0x2956000) (5) Data frame handling\nI0918 02:32:25.340367     418 log.go:172] (0x2ace000) (3) Data frame sent\nI0918 02:32:25.340509     418 log.go:172] (0x24ae9a0) Data frame received for 3\nI0918 02:32:25.340622     418 log.go:172] (0x2ace000) (3) Data frame handling\nI0918 02:32:25.341706     418 log.go:172] (0x24ae9a0) Data frame received for 1\nI0918 02:32:25.341896     418 log.go:172] (0x24af260) (1) Data frame handling\nI0918 02:32:25.342100     418 log.go:172] (0x24af260) (1) Data frame sent\nI0918 02:32:25.344694     418 log.go:172] (0x24ae9a0) (0x24af260) Stream removed, broadcasting: 1\nI0918 02:32:25.345200     418 log.go:172] (0x24ae9a0) Go away received\nI0918 02:32:25.349946     418 log.go:172] (0x24ae9a0) (0x24af260) Stream removed, broadcasting: 1\nI0918 02:32:25.350380     418 log.go:172] (0x24ae9a0) (0x2ace000) Stream removed, broadcasting: 3\nI0918 02:32:25.350715     418 log.go:172] (0x24ae9a0) (0x2956000) Stream removed, broadcasting: 5\n"
Sep 18 02:32:25.359: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 18 02:32:25.360: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Sep 18 02:32:35.415: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Sep 18 02:32:45.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4970 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:32:46.966: INFO: stderr: "I0918 02:32:46.879801     441 log.go:172] (0x2a30230) (0x2a305b0) Create stream\nI0918 02:32:46.883489     441 log.go:172] (0x2a30230) (0x2a305b0) Stream added, broadcasting: 1\nI0918 02:32:46.894951     441 log.go:172] (0x2a30230) Reply frame received for 1\nI0918 02:32:46.895433     441 log.go:172] (0x2a30230) (0x269c1c0) Create stream\nI0918 02:32:46.895503     441 log.go:172] (0x2a30230) (0x269c1c0) Stream added, broadcasting: 3\nI0918 02:32:46.897137     441 log.go:172] (0x2a30230) Reply frame received for 3\nI0918 02:32:46.897480     441 log.go:172] (0x2a30230) (0x2a30690) Create stream\nI0918 02:32:46.897571     441 log.go:172] (0x2a30230) (0x2a30690) Stream added, broadcasting: 5\nI0918 02:32:46.898905     441 log.go:172] (0x2a30230) Reply frame received for 5\nI0918 02:32:46.947510     441 log.go:172] (0x2a30230) Data frame received for 3\nI0918 02:32:46.947786     441 log.go:172] (0x269c1c0) (3) Data frame handling\nI0918 02:32:46.948060     441 log.go:172] (0x2a30230) Data frame received for 5\nI0918 02:32:46.948392     441 log.go:172] (0x2a30690) (5) Data frame handling\nI0918 02:32:46.948571     441 log.go:172] (0x2a30230) Data frame received for 1\nI0918 02:32:46.948758     441 log.go:172] (0x2a305b0) (1) Data frame handling\nI0918 02:32:46.948938     441 log.go:172] (0x269c1c0) (3) Data frame sent\nI0918 02:32:46.949252     441 log.go:172] (0x2a30690) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0918 02:32:46.949532     441 log.go:172] (0x2a305b0) (1) Data frame sent\nI0918 02:32:46.950343     441 log.go:172] (0x2a30230) Data frame received for 5\nI0918 02:32:46.950425     441 log.go:172] (0x2a30690) (5) Data frame handling\nI0918 02:32:46.950648     441 log.go:172] (0x2a30230) Data frame received for 3\nI0918 02:32:46.952024     441 log.go:172] (0x2a30230) (0x2a305b0) Stream removed, broadcasting: 1\nI0918 02:32:46.953640     441 log.go:172] (0x269c1c0) (3) Data frame handling\nI0918 02:32:46.954031     441 log.go:172] (0x2a30230) Go away received\nI0918 02:32:46.958112     441 log.go:172] (0x2a30230) (0x2a305b0) Stream removed, broadcasting: 1\nI0918 02:32:46.958448     441 log.go:172] (0x2a30230) (0x269c1c0) Stream removed, broadcasting: 3\nI0918 02:32:46.958716     441 log.go:172] (0x2a30230) (0x2a30690) Stream removed, broadcasting: 5\n"
Sep 18 02:32:46.967: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 18 02:32:46.968: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 18 02:32:57.030: INFO: Waiting for StatefulSet statefulset-4970/ss2 to complete update
Sep 18 02:32:57.031: INFO: Waiting for Pod statefulset-4970/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Sep 18 02:32:57.031: INFO: Waiting for Pod statefulset-4970/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Sep 18 02:33:07.169: INFO: Waiting for StatefulSet statefulset-4970/ss2 to complete update
Sep 18 02:33:07.169: INFO: Waiting for Pod statefulset-4970/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Sep 18 02:33:17.045: INFO: Waiting for StatefulSet statefulset-4970/ss2 to complete update
STEP: Rolling back to a previous revision
Sep 18 02:33:27.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4970 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 18 02:33:28.464: INFO: stderr: "I0918 02:33:28.314566     463 log.go:172] (0x27aa150) (0x27aa3f0) Create stream\nI0918 02:33:28.318099     463 log.go:172] (0x27aa150) (0x27aa3f0) Stream added, broadcasting: 1\nI0918 02:33:28.334508     463 log.go:172] (0x27aa150) Reply frame received for 1\nI0918 02:33:28.335199     463 log.go:172] (0x27aa150) (0x28321c0) Create stream\nI0918 02:33:28.335277     463 log.go:172] (0x27aa150) (0x28321c0) Stream added, broadcasting: 3\nI0918 02:33:28.337884     463 log.go:172] (0x27aa150) Reply frame received for 3\nI0918 02:33:28.338145     463 log.go:172] (0x27aa150) (0x27aa540) Create stream\nI0918 02:33:28.338210     463 log.go:172] (0x27aa150) (0x27aa540) Stream added, broadcasting: 5\nI0918 02:33:28.339411     463 log.go:172] (0x27aa150) Reply frame received for 5\nI0918 02:33:28.419460     463 log.go:172] (0x27aa150) Data frame received for 5\nI0918 02:33:28.419800     463 log.go:172] (0x27aa540) (5) Data frame handling\nI0918 02:33:28.420625     463 log.go:172] (0x27aa540) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0918 02:33:28.444689     463 log.go:172] (0x27aa150) Data frame received for 3\nI0918 02:33:28.444896     463 log.go:172] (0x28321c0) (3) Data frame handling\nI0918 02:33:28.445138     463 log.go:172] (0x27aa150) Data frame received for 5\nI0918 02:33:28.445557     463 log.go:172] (0x27aa540) (5) Data frame handling\nI0918 02:33:28.445840     463 log.go:172] (0x28321c0) (3) Data frame sent\nI0918 02:33:28.446014     463 log.go:172] (0x27aa150) Data frame received for 3\nI0918 02:33:28.446200     463 log.go:172] (0x28321c0) (3) Data frame handling\nI0918 02:33:28.446619     463 log.go:172] (0x27aa150) Data frame received for 1\nI0918 02:33:28.446741     463 log.go:172] (0x27aa3f0) (1) Data frame handling\nI0918 02:33:28.446872     463 log.go:172] (0x27aa3f0) (1) Data frame sent\nI0918 02:33:28.448655     463 log.go:172] (0x27aa150) (0x27aa3f0) Stream removed, broadcasting: 1\nI0918 02:33:28.451935     463 log.go:172] (0x27aa150) Go away received\nI0918 02:33:28.454795     463 log.go:172] (0x27aa150) (0x27aa3f0) Stream removed, broadcasting: 1\nI0918 02:33:28.455181     463 log.go:172] (0x27aa150) (0x28321c0) Stream removed, broadcasting: 3\nI0918 02:33:28.455446     463 log.go:172] (0x27aa150) (0x27aa540) Stream removed, broadcasting: 5\n"
Sep 18 02:33:28.465: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 18 02:33:28.465: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 18 02:33:38.513: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Sep 18 02:33:48.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4970 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:33:49.916: INFO: stderr: "I0918 02:33:49.794662     486 log.go:172] (0x262ccb0) (0x262cd20) Create stream\nI0918 02:33:49.800443     486 log.go:172] (0x262ccb0) (0x262cd20) Stream added, broadcasting: 1\nI0918 02:33:49.817831     486 log.go:172] (0x262ccb0) Reply frame received for 1\nI0918 02:33:49.818884     486 log.go:172] (0x262ccb0) (0x2766000) Create stream\nI0918 02:33:49.819054     486 log.go:172] (0x262ccb0) (0x2766000) Stream added, broadcasting: 3\nI0918 02:33:49.821024     486 log.go:172] (0x262ccb0) Reply frame received for 3\nI0918 02:33:49.821305     486 log.go:172] (0x262ccb0) (0x25d4000) Create stream\nI0918 02:33:49.821386     486 log.go:172] (0x262ccb0) (0x25d4000) Stream added, broadcasting: 5\nI0918 02:33:49.822648     486 log.go:172] (0x262ccb0) Reply frame received for 5\nI0918 02:33:49.899722     486 log.go:172] (0x262ccb0) Data frame received for 3\nI0918 02:33:49.900282     486 log.go:172] (0x2766000) (3) Data frame handling\nI0918 02:33:49.900563     486 log.go:172] (0x262ccb0) Data frame received for 5\nI0918 02:33:49.900811     486 log.go:172] (0x25d4000) (5) Data frame handling\nI0918 02:33:49.901490     486 log.go:172] (0x25d4000) (5) Data frame sent\nI0918 02:33:49.901862     486 log.go:172] (0x262ccb0) Data frame received for 1\nI0918 02:33:49.901948     486 log.go:172] (0x262cd20) (1) Data frame handling\nI0918 02:33:49.902088     486 log.go:172] (0x2766000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0918 02:33:49.902737     486 log.go:172] (0x262ccb0) Data frame received for 3\nI0918 02:33:49.902881     486 log.go:172] (0x2766000) (3) Data frame handling\nI0918 02:33:49.903027     486 log.go:172] (0x262ccb0) Data frame received for 5\nI0918 02:33:49.903286     486 log.go:172] (0x25d4000) (5) Data frame handling\nI0918 02:33:49.903554     486 log.go:172] (0x262cd20) (1) Data frame sent\nI0918 02:33:49.904558     486 log.go:172] (0x262ccb0) (0x262cd20) Stream removed, broadcasting: 1\nI0918 02:33:49.907484     486 log.go:172] (0x262ccb0) Go away received\nI0918 02:33:49.909070     486 log.go:172] (0x262ccb0) (0x262cd20) Stream removed, broadcasting: 1\nI0918 02:33:49.909452     486 log.go:172] (0x262ccb0) (0x2766000) Stream removed, broadcasting: 3\nI0918 02:33:49.909703     486 log.go:172] (0x262ccb0) (0x25d4000) Stream removed, broadcasting: 5\n"
Sep 18 02:33:49.917: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 18 02:33:49.917: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 18 02:33:59.958: INFO: Waiting for StatefulSet statefulset-4970/ss2 to complete update
Sep 18 02:33:59.958: INFO: Waiting for Pod statefulset-4970/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Sep 18 02:33:59.959: INFO: Waiting for Pod statefulset-4970/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Sep 18 02:33:59.959: INFO: Waiting for Pod statefulset-4970/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Sep 18 02:34:09.973: INFO: Waiting for StatefulSet statefulset-4970/ss2 to complete update
Sep 18 02:34:09.973: INFO: Waiting for Pod statefulset-4970/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Sep 18 02:34:09.973: INFO: Waiting for Pod statefulset-4970/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Sep 18 02:34:19.972: INFO: Waiting for StatefulSet statefulset-4970/ss2 to complete update
Sep 18 02:34:19.972: INFO: Waiting for Pod statefulset-4970/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep 18 02:34:29.974: INFO: Deleting all statefulset in ns statefulset-4970
Sep 18 02:34:29.980: INFO: Scaling statefulset ss2 to 0
Sep 18 02:34:50.001: INFO: Waiting for statefulset status.replicas updated to 0
Sep 18 02:34:50.006: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:34:50.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4970" for this suite.
Sep 18 02:34:56.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:34:56.206: INFO: namespace statefulset-4970 deletion completed in 6.176188206s

• [SLOW TEST:162.474 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:34:56.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9359
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-9359
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9359
Sep 18 02:34:56.323: INFO: Found 0 stateful pods, waiting for 1
Sep 18 02:35:06.331: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Sep 18 02:35:06.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9359 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 18 02:35:07.719: INFO: stderr: "I0918 02:35:07.581354     508 log.go:172] (0x29060e0) (0x2906230) Create stream\nI0918 02:35:07.585151     508 log.go:172] (0x29060e0) (0x2906230) Stream added, broadcasting: 1\nI0918 02:35:07.602942     508 log.go:172] (0x29060e0) Reply frame received for 1\nI0918 02:35:07.603447     508 log.go:172] (0x29060e0) (0x24a4620) Create stream\nI0918 02:35:07.603514     508 log.go:172] (0x29060e0) (0x24a4620) Stream added, broadcasting: 3\nI0918 02:35:07.604986     508 log.go:172] (0x29060e0) Reply frame received for 3\nI0918 02:35:07.605246     508 log.go:172] (0x29060e0) (0x29068c0) Create stream\nI0918 02:35:07.605314     508 log.go:172] (0x29060e0) (0x29068c0) Stream added, broadcasting: 5\nI0918 02:35:07.606600     508 log.go:172] (0x29060e0) Reply frame received for 5\nI0918 02:35:07.665564     508 log.go:172] (0x29060e0) Data frame received for 5\nI0918 02:35:07.665818     508 log.go:172] (0x29068c0) (5) Data frame handling\nI0918 02:35:07.666159     508 log.go:172] (0x29068c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0918 02:35:07.703418     508 log.go:172] (0x29060e0) Data frame received for 3\nI0918 02:35:07.703680     508 log.go:172] (0x24a4620) (3) Data frame handling\nI0918 02:35:07.703941     508 log.go:172] (0x29060e0) Data frame received for 5\nI0918 02:35:07.704406     508 log.go:172] (0x24a4620) (3) Data frame sent\nI0918 02:35:07.704635     508 log.go:172] (0x29060e0) Data frame received for 3\nI0918 02:35:07.704797     508 log.go:172] (0x29068c0) (5) Data frame handling\nI0918 02:35:07.705191     508 log.go:172] (0x24a4620) (3) Data frame handling\nI0918 02:35:07.706296     508 log.go:172] (0x29060e0) Data frame received for 1\nI0918 02:35:07.706417     508 log.go:172] (0x2906230) (1) Data frame handling\nI0918 02:35:07.706571     508 log.go:172] (0x2906230) (1) Data frame sent\nI0918 02:35:07.707657     508 log.go:172] (0x29060e0) (0x2906230) Stream removed, broadcasting: 1\nI0918 02:35:07.710170     508 log.go:172] (0x29060e0) Go away received\nI0918 02:35:07.713125     508 log.go:172] (0x29060e0) (0x2906230) Stream removed, broadcasting: 1\nI0918 02:35:07.713294     508 log.go:172] (0x29060e0) (0x24a4620) Stream removed, broadcasting: 3\nI0918 02:35:07.713440     508 log.go:172] (0x29060e0) (0x29068c0) Stream removed, broadcasting: 5\n"
Sep 18 02:35:07.720: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 18 02:35:07.720: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 18 02:35:07.728: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Sep 18 02:35:17.735: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 18 02:35:17.735: INFO: Waiting for statefulset status.replicas updated to 0
Sep 18 02:35:17.757: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Sep 18 02:35:17.758: INFO: ss-0  iruya-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  }]
Sep 18 02:35:17.759: INFO: ss-1                Pending         []
Sep 18 02:35:17.759: INFO: 
Sep 18 02:35:17.759: INFO: StatefulSet ss has not reached scale 3, at 2
Sep 18 02:35:18.767: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990823898s
Sep 18 02:35:19.773: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982470974s
Sep 18 02:35:20.780: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.976966281s
Sep 18 02:35:21.791: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.969560382s
Sep 18 02:35:22.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.959221453s
Sep 18 02:35:23.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.945675807s
Sep 18 02:35:24.821: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.937506485s
Sep 18 02:35:25.830: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.928992637s
Sep 18 02:35:26.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 919.401687ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9359
Sep 18 02:35:27.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9359 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:35:29.227: INFO: stderr: "I0918 02:35:29.095632     530 log.go:172] (0x2662150) (0x2662230) Create stream\nI0918 02:35:29.098229     530 log.go:172] (0x2662150) (0x2662230) Stream added, broadcasting: 1\nI0918 02:35:29.109411     530 log.go:172] (0x2662150) Reply frame received for 1\nI0918 02:35:29.110522     530 log.go:172] (0x2662150) (0x24a27e0) Create stream\nI0918 02:35:29.110658     530 log.go:172] (0x2662150) (0x24a27e0) Stream added, broadcasting: 3\nI0918 02:35:29.113039     530 log.go:172] (0x2662150) Reply frame received for 3\nI0918 02:35:29.113541     530 log.go:172] (0x2662150) (0x24a2930) Create stream\nI0918 02:35:29.113670     530 log.go:172] (0x2662150) (0x24a2930) Stream added, broadcasting: 5\nI0918 02:35:29.115676     530 log.go:172] (0x2662150) Reply frame received for 5\nI0918 02:35:29.205669     530 log.go:172] (0x2662150) Data frame received for 3\nI0918 02:35:29.206149     530 log.go:172] (0x2662150) Data frame received for 5\nI0918 02:35:29.206383     530 log.go:172] (0x24a2930) (5) Data frame handling\nI0918 02:35:29.206559     530 log.go:172] (0x2662150) Data frame received for 1\nI0918 02:35:29.206701     530 log.go:172] (0x24a27e0) (3) Data frame handling\nI0918 02:35:29.206953     530 log.go:172] (0x2662230) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0918 02:35:29.208474     530 log.go:172] (0x24a27e0) (3) Data frame sent\nI0918 02:35:29.208682     530 log.go:172] (0x24a2930) (5) Data frame sent\nI0918 02:35:29.208873     530 log.go:172] (0x2662150) Data frame received for 5\nI0918 02:35:29.209276     530 log.go:172] (0x24a2930) (5) Data frame handling\nI0918 02:35:29.209496     530 log.go:172] (0x2662230) (1) Data frame sent\nI0918 02:35:29.209821     530 log.go:172] (0x2662150) Data frame received for 3\nI0918 02:35:29.210172     530 log.go:172] (0x24a27e0) (3) Data frame handling\nI0918 02:35:29.211669     530 log.go:172] (0x2662150) (0x2662230) Stream removed, broadcasting: 1\nI0918 02:35:29.214020     530 log.go:172] (0x2662150) Go away received\nI0918 02:35:29.217534     530 log.go:172] (0x2662150) (0x2662230) Stream removed, broadcasting: 1\nI0918 02:35:29.217868     530 log.go:172] (0x2662150) (0x24a27e0) Stream removed, broadcasting: 3\nI0918 02:35:29.218136     530 log.go:172] (0x2662150) (0x24a2930) Stream removed, broadcasting: 5\n"
Sep 18 02:35:29.229: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 18 02:35:29.229: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 18 02:35:29.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9359 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:35:30.596: INFO: stderr: "I0918 02:35:30.504786     552 log.go:172] (0x29f19d0) (0x29f1a40) Create stream\nI0918 02:35:30.507062     552 log.go:172] (0x29f19d0) (0x29f1a40) Stream added, broadcasting: 1\nI0918 02:35:30.529060     552 log.go:172] (0x29f19d0) Reply frame received for 1\nI0918 02:35:30.529594     552 log.go:172] (0x29f19d0) (0x25ac070) Create stream\nI0918 02:35:30.529673     552 log.go:172] (0x29f19d0) (0x25ac070) Stream added, broadcasting: 3\nI0918 02:35:30.531022     552 log.go:172] (0x29f19d0) Reply frame received for 3\nI0918 02:35:30.531332     552 log.go:172] (0x29f19d0) (0x29f0000) Create stream\nI0918 02:35:30.531416     552 log.go:172] (0x29f19d0) (0x29f0000) Stream added, broadcasting: 5\nI0918 02:35:30.532684     552 log.go:172] (0x29f19d0) Reply frame received for 5\nI0918 02:35:30.579692     552 log.go:172] (0x29f19d0) Data frame received for 3\nI0918 02:35:30.580072     552 log.go:172] (0x29f19d0) Data frame received for 5\nI0918 02:35:30.580281     552 log.go:172] (0x29f0000) (5) Data frame handling\nI0918 02:35:30.580475     552 log.go:172] (0x29f19d0) Data frame received for 1\nI0918 02:35:30.580659     552 log.go:172] (0x29f1a40) (1) Data frame handling\nI0918 02:35:30.580866     552 log.go:172] (0x25ac070) (3) Data frame handling\nI0918 02:35:30.581124     552 log.go:172] (0x29f0000) (5) Data frame sent\nI0918 02:35:30.581440     552 log.go:172] (0x29f1a40) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0918 02:35:30.581622     552 log.go:172] (0x25ac070) (3) Data frame sent\nI0918 02:35:30.581798     552 log.go:172] (0x29f19d0) Data frame received for 3\nI0918 02:35:30.581925     552 log.go:172] (0x25ac070) (3) Data frame handling\nI0918 02:35:30.582215     552 log.go:172] (0x29f19d0) Data frame received for 5\nI0918 02:35:30.582335     552 log.go:172] (0x29f0000) (5) Data frame handling\nI0918 02:35:30.585041     552 log.go:172] (0x29f19d0) (0x29f1a40) Stream removed, broadcasting: 1\nI0918 02:35:30.586577     552 log.go:172] (0x29f19d0) Go away received\nI0918 02:35:30.588556     552 log.go:172] (0x29f19d0) (0x29f1a40) Stream removed, broadcasting: 1\nI0918 02:35:30.589018     552 log.go:172] (0x29f19d0) (0x25ac070) Stream removed, broadcasting: 3\nI0918 02:35:30.589225     552 log.go:172] (0x29f19d0) (0x29f0000) Stream removed, broadcasting: 5\n"
Sep 18 02:35:30.597: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 18 02:35:30.597: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 18 02:35:30.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9359 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:35:31.960: INFO: stderr: "I0918 02:35:31.834275     573 log.go:172] (0x28e02a0) (0x28e0380) Create stream\nI0918 02:35:31.838141     573 log.go:172] (0x28e02a0) (0x28e0380) Stream added, broadcasting: 1\nI0918 02:35:31.849751     573 log.go:172] (0x28e02a0) Reply frame received for 1\nI0918 02:35:31.850763     573 log.go:172] (0x28e02a0) (0x28e03f0) Create stream\nI0918 02:35:31.850890     573 log.go:172] (0x28e02a0) (0x28e03f0) Stream added, broadcasting: 3\nI0918 02:35:31.853252     573 log.go:172] (0x28e02a0) Reply frame received for 3\nI0918 02:35:31.853725     573 log.go:172] (0x28e02a0) (0x28e0460) Create stream\nI0918 02:35:31.853834     573 log.go:172] (0x28e02a0) (0x28e0460) Stream added, broadcasting: 5\nI0918 02:35:31.855777     573 log.go:172] (0x28e02a0) Reply frame received for 5\nI0918 02:35:31.939947     573 log.go:172] (0x28e02a0) Data frame received for 3\nI0918 02:35:31.940596     573 log.go:172] (0x28e02a0) Data frame received for 5\nI0918 02:35:31.940882     573 log.go:172] (0x28e02a0) Data frame received for 1\nI0918 02:35:31.941269     573 log.go:172] (0x28e0380) (1) Data frame handling\nI0918 02:35:31.941953     573 log.go:172] (0x28e0460) (5) Data frame handling\nI0918 02:35:31.942245     573 log.go:172] (0x28e03f0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0918 02:35:31.944795     573 log.go:172] (0x28e0460) (5) Data frame sent\nI0918 02:35:31.945108     573 log.go:172] (0x28e0380) (1) Data frame sent\nI0918 02:35:31.945434     573 log.go:172] (0x28e03f0) (3) Data frame sent\nI0918 02:35:31.945646     573 log.go:172] (0x28e02a0) Data frame received for 3\nI0918 02:35:31.945767     573 log.go:172] (0x28e03f0) (3) Data frame handling\nI0918 02:35:31.946725     573 log.go:172] (0x28e02a0) Data frame received for 5\nI0918 02:35:31.946893     573 log.go:172] (0x28e02a0) (0x28e0380) Stream removed, broadcasting: 1\nI0918 02:35:31.947904     573 log.go:172] (0x28e0460) (5) Data frame handling\nI0918 02:35:31.948536     573 log.go:172] (0x28e02a0) Go away received\nI0918 02:35:31.951179     573 log.go:172] (0x28e02a0) (0x28e0380) Stream removed, broadcasting: 1\nI0918 02:35:31.951783     573 log.go:172] (0x28e02a0) (0x28e03f0) Stream removed, broadcasting: 3\nI0918 02:35:31.952012     573 log.go:172] (0x28e02a0) (0x28e0460) Stream removed, broadcasting: 5\n"
Sep 18 02:35:31.963: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 18 02:35:31.963: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 18 02:35:31.971: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 02:35:31.972: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 02:35:31.972: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Sep 18 02:35:31.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9359 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 18 02:35:33.355: INFO: stderr: "I0918 02:35:33.236087     595 log.go:172] (0x2b00000) (0x2b00070) Create stream\nI0918 02:35:33.239343     595 log.go:172] (0x2b00000) (0x2b00070) Stream added, broadcasting: 1\nI0918 02:35:33.249748     595 log.go:172] (0x2b00000) Reply frame received for 1\nI0918 02:35:33.250665     595 log.go:172] (0x2b00000) (0x27f01c0) Create stream\nI0918 02:35:33.250773     595 log.go:172] (0x2b00000) (0x27f01c0) Stream added, broadcasting: 3\nI0918 02:35:33.252665     595 log.go:172] (0x2b00000) Reply frame received for 3\nI0918 02:35:33.253107     595 log.go:172] (0x2b00000) (0x2b000e0) Create stream\nI0918 02:35:33.253216     595 log.go:172] (0x2b00000) (0x2b000e0) Stream added, broadcasting: 5\nI0918 02:35:33.255182     595 log.go:172] (0x2b00000) Reply frame received for 5\nI0918 02:35:33.338234     595 log.go:172] (0x2b00000) Data frame received for 3\nI0918 02:35:33.338663     595 log.go:172] (0x2b00000) Data frame received for 5\nI0918 02:35:33.338844     595 log.go:172] (0x2b000e0) (5) Data frame handling\nI0918 02:35:33.339127     595 log.go:172] (0x27f01c0) (3) Data frame handling\nI0918 02:35:33.339593     595 log.go:172] (0x2b00000) Data frame received for 1\nI0918 02:35:33.339727     595 log.go:172] (0x2b00070) (1) Data frame handling\nI0918 02:35:33.340252     595 log.go:172] (0x2b00070) (1) Data frame sent\nI0918 02:35:33.340452     595 log.go:172] (0x2b000e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0918 02:35:33.340951     595 log.go:172] (0x27f01c0) (3) Data frame sent\nI0918 02:35:33.341131     595 log.go:172] (0x2b00000) Data frame received for 5\nI0918 02:35:33.341243     595 log.go:172] (0x2b000e0) (5) Data frame handling\nI0918 02:35:33.341397     595 log.go:172] (0x2b00000) Data frame received for 3\nI0918 02:35:33.341574     595 log.go:172] (0x27f01c0) (3) Data frame handling\nI0918 02:35:33.342792     595 log.go:172] (0x2b00000) (0x2b00070) Stream removed, broadcasting: 1\nI0918 02:35:33.345621     595 log.go:172] (0x2b00000) Go away received\nI0918 02:35:33.346976     595 log.go:172] (0x2b00000) (0x2b00070) Stream removed, broadcasting: 1\nI0918 02:35:33.347279     595 log.go:172] (0x2b00000) (0x27f01c0) Stream removed, broadcasting: 3\nI0918 02:35:33.347645     595 log.go:172] (0x2b00000) (0x2b000e0) Stream removed, broadcasting: 5\n"
Sep 18 02:35:33.356: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 18 02:35:33.356: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 18 02:35:33.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9359 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 18 02:35:34.822: INFO: stderr: "I0918 02:35:34.665695     616 log.go:172] (0x2bb6070) (0x2bb60e0) Create stream\nI0918 02:35:34.670670     616 log.go:172] (0x2bb6070) (0x2bb60e0) Stream added, broadcasting: 1\nI0918 02:35:34.687957     616 log.go:172] (0x2bb6070) Reply frame received for 1\nI0918 02:35:34.688424     616 log.go:172] (0x2bb6070) (0x282d2d0) Create stream\nI0918 02:35:34.688497     616 log.go:172] (0x2bb6070) (0x282d2d0) Stream added, broadcasting: 3\nI0918 02:35:34.690027     616 log.go:172] (0x2bb6070) Reply frame received for 3\nI0918 02:35:34.690351     616 log.go:172] (0x2bb6070) (0x282d3b0) Create stream\nI0918 02:35:34.690443     616 log.go:172] (0x2bb6070) (0x282d3b0) Stream added, broadcasting: 5\nI0918 02:35:34.691799     616 log.go:172] (0x2bb6070) Reply frame received for 5\nI0918 02:35:34.773155     616 log.go:172] (0x2bb6070) Data frame received for 5\nI0918 02:35:34.773525     616 log.go:172] (0x282d3b0) (5) Data frame handling\nI0918 02:35:34.774191     616 log.go:172] (0x282d3b0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0918 02:35:34.805040     616 log.go:172] (0x2bb6070) Data frame received for 5\nI0918 02:35:34.805326     616 log.go:172] (0x282d3b0) (5) Data frame handling\nI0918 02:35:34.805623     616 log.go:172] (0x2bb6070) Data frame received for 3\nI0918 02:35:34.805799     616 log.go:172] (0x282d2d0) (3) Data frame handling\nI0918 02:35:34.805955     616 log.go:172] (0x282d2d0) (3) Data frame sent\nI0918 02:35:34.806097     616 log.go:172] (0x2bb6070) Data frame received for 3\nI0918 02:35:34.806250     616 log.go:172] (0x282d2d0) (3) Data frame handling\nI0918 02:35:34.806768     616 log.go:172] (0x2bb6070) Data frame received for 1\nI0918 02:35:34.806948     616 log.go:172] (0x2bb60e0) (1) Data frame handling\nI0918 02:35:34.807114     616 log.go:172] (0x2bb60e0) (1) Data frame sent\nI0918 02:35:34.808880     616 log.go:172] (0x2bb6070) (0x2bb60e0) Stream removed, broadcasting: 1\nI0918 02:35:34.810183     616 log.go:172] (0x2bb6070) Go away received\nI0918 02:35:34.812938     616 log.go:172] (0x2bb6070) (0x2bb60e0) Stream removed, broadcasting: 1\nI0918 02:35:34.813394     616 log.go:172] (0x2bb6070) (0x282d2d0) Stream removed, broadcasting: 3\nI0918 02:35:34.813614     616 log.go:172] (0x2bb6070) (0x282d3b0) Stream removed, broadcasting: 5\n"
Sep 18 02:35:34.824: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 18 02:35:34.824: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 18 02:35:34.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9359 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 18 02:35:36.284: INFO: stderr: "I0918 02:35:36.109049     639 log.go:172] (0x2b2e000) (0x2b2e070) Create stream\nI0918 02:35:36.114585     639 log.go:172] (0x2b2e000) (0x2b2e070) Stream added, broadcasting: 1\nI0918 02:35:36.129633     639 log.go:172] (0x2b2e000) Reply frame received for 1\nI0918 02:35:36.130370     639 log.go:172] (0x2b2e000) (0x282ca10) Create stream\nI0918 02:35:36.130475     639 log.go:172] (0x2b2e000) (0x282ca10) Stream added, broadcasting: 3\nI0918 02:35:36.132488     639 log.go:172] (0x2b2e000) Reply frame received for 3\nI0918 02:35:36.132975     639 log.go:172] (0x2b2e000) (0x2b2e0e0) Create stream\nI0918 02:35:36.133095     639 log.go:172] (0x2b2e000) (0x2b2e0e0) Stream added, broadcasting: 5\nI0918 02:35:36.134780     639 log.go:172] (0x2b2e000) Reply frame received for 5\nI0918 02:35:36.233460     639 log.go:172] (0x2b2e000) Data frame received for 5\nI0918 02:35:36.233837     639 log.go:172] (0x2b2e0e0) (5) Data frame handling\nI0918 02:35:36.234517     639 log.go:172] (0x2b2e0e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0918 02:35:36.266582     639 log.go:172] (0x2b2e000) Data frame received for 3\nI0918 02:35:36.266728     639 log.go:172] (0x282ca10) (3) Data frame handling\nI0918 02:35:36.266896     639 log.go:172] (0x282ca10) (3) Data frame sent\nI0918 02:35:36.267032     639 log.go:172] (0x2b2e000) Data frame received for 3\nI0918 02:35:36.267226     639 log.go:172] (0x2b2e000) Data frame received for 5\nI0918 02:35:36.267455     639 log.go:172] (0x2b2e0e0) (5) Data frame handling\nI0918 02:35:36.267561     639 log.go:172] (0x282ca10) (3) Data frame handling\nI0918 02:35:36.268308     639 log.go:172] (0x2b2e000) Data frame received for 1\nI0918 02:35:36.268392     639 log.go:172] (0x2b2e070) (1) Data frame handling\nI0918 02:35:36.268491     639 log.go:172] (0x2b2e070) (1) Data frame sent\nI0918 02:35:36.270260     639 log.go:172] (0x2b2e000) (0x2b2e070) Stream removed, broadcasting: 1\nI0918 02:35:36.272390     639 log.go:172] (0x2b2e000) Go away received\nI0918 02:35:36.275987     639 log.go:172] (0x2b2e000) (0x2b2e070) Stream removed, broadcasting: 1\nI0918 02:35:36.276380     639 log.go:172] (0x2b2e000) (0x282ca10) Stream removed, broadcasting: 3\nI0918 02:35:36.276635     639 log.go:172] (0x2b2e000) (0x2b2e0e0) Stream removed, broadcasting: 5\n"
Sep 18 02:35:36.287: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 18 02:35:36.287: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 18 02:35:36.288: INFO: Waiting for statefulset status.replicas updated to 0
Sep 18 02:35:36.294: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Sep 18 02:35:46.312: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 18 02:35:46.312: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Sep 18 02:35:46.312: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Sep 18 02:35:46.331: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep 18 02:35:46.331: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  }]
Sep 18 02:35:46.332: INFO: ss-1  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  }]
Sep 18 02:35:46.332: INFO: ss-2  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  }]
Sep 18 02:35:46.332: INFO: 
Sep 18 02:35:46.332: INFO: StatefulSet ss has not reached scale 0, at 3
Sep 18 02:35:47.488: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep 18 02:35:47.489: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  }]
Sep 18 02:35:47.490: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  }]
Sep 18 02:35:47.490: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  }]
Sep 18 02:35:47.491: INFO: 
Sep 18 02:35:47.491: INFO: StatefulSet ss has not reached scale 0, at 3
Sep 18 02:35:48.500: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep 18 02:35:48.500: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  }]
Sep 18 02:35:48.501: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  }]
Sep 18 02:35:48.501: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  }]
Sep 18 02:35:48.502: INFO: 
Sep 18 02:35:48.502: INFO: StatefulSet ss has not reached scale 0, at 3
Sep 18 02:35:49.519: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Sep 18 02:35:49.519: INFO: ss-0  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  }]
Sep 18 02:35:49.520: INFO: ss-2  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  }]
Sep 18 02:35:49.520: INFO: 
Sep 18 02:35:49.520: INFO: StatefulSet ss has not reached scale 0, at 2
Sep 18 02:35:50.528: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Sep 18 02:35:50.529: INFO: ss-0  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  }]
Sep 18 02:35:50.529: INFO: ss-2  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  }]
Sep 18 02:35:50.530: INFO: 
Sep 18 02:35:50.530: INFO: StatefulSet ss has not reached scale 0, at 2
Sep 18 02:35:51.538: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Sep 18 02:35:51.539: INFO: ss-0  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  }]
Sep 18 02:35:51.539: INFO: ss-2  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  }]
Sep 18 02:35:51.540: INFO: 
Sep 18 02:35:51.540: INFO: StatefulSet ss has not reached scale 0, at 2
Sep 18 02:35:52.549: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Sep 18 02:35:52.549: INFO: ss-0  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  }]
Sep 18 02:35:52.550: INFO: ss-2  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  }]
Sep 18 02:35:52.550: INFO: 
Sep 18 02:35:52.550: INFO: StatefulSet ss has not reached scale 0, at 2
Sep 18 02:35:53.573: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Sep 18 02:35:53.573: INFO: ss-0  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:34:56 +0000 UTC  }]
Sep 18 02:35:53.573: INFO: ss-2  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  }]
Sep 18 02:35:53.574: INFO: 
Sep 18 02:35:53.574: INFO: StatefulSet ss has not reached scale 0, at 2
Sep 18 02:35:54.632: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Sep 18 02:35:54.632: INFO: ss-2  iruya-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 02:35:17 +0000 UTC  }]
Sep 18 02:35:54.633: INFO: 
Sep 18 02:35:54.633: INFO: StatefulSet ss has not reached scale 0, at 1
Sep 18 02:35:55.638: INFO: Verifying statefulset ss doesn't scale past 0 for another 687.537439ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9359
Sep 18 02:35:56.645: INFO: Scaling statefulset ss to 0
Sep 18 02:35:56.659: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep 18 02:35:56.664: INFO: Deleting all statefulset in ns statefulset-9359
Sep 18 02:35:56.669: INFO: Scaling statefulset ss to 0
Sep 18 02:35:56.682: INFO: Waiting for statefulset status.replicas updated to 0
Sep 18 02:35:56.686: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:35:56.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9359" for this suite.
Sep 18 02:36:02.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:36:02.883: INFO: namespace statefulset-9359 deletion completed in 6.176332905s

• [SLOW TEST:66.672 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:36:02.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6815
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6815
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6815
Sep 18 02:36:03.062: INFO: Found 0 stateful pods, waiting for 1
Sep 18 02:36:13.099: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Sep 18 02:36:13.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 18 02:36:14.486: INFO: stderr: "I0918 02:36:14.352446     661 log.go:172] (0x2b0c460) (0x2b0c4d0) Create stream\nI0918 02:36:14.356478     661 log.go:172] (0x2b0c460) (0x2b0c4d0) Stream added, broadcasting: 1\nI0918 02:36:14.365843     661 log.go:172] (0x2b0c460) Reply frame received for 1\nI0918 02:36:14.366642     661 log.go:172] (0x2b0c460) (0x27a5030) Create stream\nI0918 02:36:14.366734     661 log.go:172] (0x2b0c460) (0x27a5030) Stream added, broadcasting: 3\nI0918 02:36:14.368942     661 log.go:172] (0x2b0c460) Reply frame received for 3\nI0918 02:36:14.369493     661 log.go:172] (0x2b0c460) (0x290c000) Create stream\nI0918 02:36:14.369636     661 log.go:172] (0x2b0c460) (0x290c000) Stream added, broadcasting: 5\nI0918 02:36:14.371628     661 log.go:172] (0x2b0c460) Reply frame received for 5\nI0918 02:36:14.443755     661 log.go:172] (0x2b0c460) Data frame received for 5\nI0918 02:36:14.444066     661 log.go:172] (0x290c000) (5) Data frame handling\nI0918 02:36:14.444862     661 log.go:172] (0x290c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0918 02:36:14.471037     661 log.go:172] (0x2b0c460) Data frame received for 3\nI0918 02:36:14.471244     661 log.go:172] (0x27a5030) (3) Data frame handling\nI0918 02:36:14.471384     661 log.go:172] (0x27a5030) (3) Data frame sent\nI0918 02:36:14.471503     661 log.go:172] (0x2b0c460) Data frame received for 3\nI0918 02:36:14.471597     661 log.go:172] (0x27a5030) (3) Data frame handling\nI0918 02:36:14.471866     661 log.go:172] (0x2b0c460) Data frame received for 5\nI0918 02:36:14.472110     661 log.go:172] (0x290c000) (5) Data frame handling\nI0918 02:36:14.472477     661 log.go:172] (0x2b0c460) Data frame received for 1\nI0918 02:36:14.472573     661 log.go:172] (0x2b0c4d0) (1) Data frame handling\nI0918 02:36:14.472663     661 log.go:172] (0x2b0c4d0) (1) Data frame sent\nI0918 02:36:14.473992     661 log.go:172] (0x2b0c460) (0x2b0c4d0) Stream removed, broadcasting: 1\nI0918 02:36:14.477003     661 log.go:172] (0x2b0c460) Go away received\nI0918 02:36:14.479658     661 log.go:172] (0x2b0c460) (0x2b0c4d0) Stream removed, broadcasting: 1\nI0918 02:36:14.479827     661 log.go:172] (0x2b0c460) (0x27a5030) Stream removed, broadcasting: 3\nI0918 02:36:14.479984     661 log.go:172] (0x2b0c460) (0x290c000) Stream removed, broadcasting: 5\n"
Sep 18 02:36:14.488: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 18 02:36:14.488: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 18 02:36:14.495: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Sep 18 02:36:24.503: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 18 02:36:24.503: INFO: Waiting for statefulset status.replicas updated to 0
Sep 18 02:36:24.547: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999986847s
Sep 18 02:36:25.555: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.970729274s
Sep 18 02:36:26.563: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.962666498s
Sep 18 02:36:27.945: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.95507801s
Sep 18 02:36:28.953: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.573148686s
Sep 18 02:36:29.962: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.565201038s
Sep 18 02:36:30.970: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.556313828s
Sep 18 02:36:32.040: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.547832181s
Sep 18 02:36:33.047: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.477915907s
Sep 18 02:36:34.078: INFO: Verifying statefulset ss doesn't scale past 1 for another 470.620675ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6815
Sep 18 02:36:35.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:36:36.479: INFO: stderr: "I0918 02:36:36.358434     685 log.go:172] (0x2b162a0) (0x2b16310) Create stream\nI0918 02:36:36.362361     685 log.go:172] (0x2b162a0) (0x2b16310) Stream added, broadcasting: 1\nI0918 02:36:36.382548     685 log.go:172] (0x2b162a0) Reply frame received for 1\nI0918 02:36:36.383101     685 log.go:172] (0x2b162a0) (0x29f4000) Create stream\nI0918 02:36:36.383180     685 log.go:172] (0x2b162a0) (0x29f4000) Stream added, broadcasting: 3\nI0918 02:36:36.384665     685 log.go:172] (0x2b162a0) Reply frame received for 3\nI0918 02:36:36.385014     685 log.go:172] (0x2b162a0) (0x24ea1c0) Create stream\nI0918 02:36:36.385141     685 log.go:172] (0x2b162a0) (0x24ea1c0) Stream added, broadcasting: 5\nI0918 02:36:36.386341     685 log.go:172] (0x2b162a0) Reply frame received for 5\nI0918 02:36:36.460506     685 log.go:172] (0x2b162a0) Data frame received for 5\nI0918 02:36:36.460832     685 log.go:172] (0x2b162a0) Data frame received for 1\nI0918 02:36:36.461319     685 log.go:172] (0x2b162a0) Data frame received for 3\nI0918 02:36:36.461584     685 log.go:172] (0x2b16310) (1) Data frame handling\nI0918 02:36:36.461966     685 log.go:172] (0x29f4000) (3) Data frame handling\nI0918 02:36:36.462198     685 log.go:172] (0x24ea1c0) (5) Data frame handling\nI0918 02:36:36.463042     685 log.go:172] (0x29f4000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0918 02:36:36.463441     685 log.go:172] (0x2b16310) (1) Data frame sent\nI0918 02:36:36.463838     685 log.go:172] (0x2b162a0) Data frame received for 3\nI0918 02:36:36.464046     685 log.go:172] (0x29f4000) (3) Data frame handling\nI0918 02:36:36.464598     685 log.go:172] (0x24ea1c0) (5) Data frame sent\nI0918 02:36:36.464697     685 log.go:172] (0x2b162a0) Data frame received for 5\nI0918 02:36:36.464770     685 log.go:172] (0x24ea1c0) (5) Data frame handling\nI0918 02:36:36.466197     685 log.go:172] (0x2b162a0) (0x2b16310) Stream removed, broadcasting: 1\nI0918 02:36:36.467977     685 log.go:172] (0x2b162a0) Go away received\nI0918 02:36:36.471471     685 log.go:172] (0x2b162a0) (0x2b16310) Stream removed, broadcasting: 1\nI0918 02:36:36.471727     685 log.go:172] (0x2b162a0) (0x29f4000) Stream removed, broadcasting: 3\nI0918 02:36:36.471937     685 log.go:172] (0x2b162a0) (0x24ea1c0) Stream removed, broadcasting: 5\n"
Sep 18 02:36:36.480: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 18 02:36:36.480: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 18 02:36:36.487: INFO: Found 1 stateful pods, waiting for 3
Sep 18 02:36:46.497: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 02:36:46.497: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 02:36:46.497: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Sep 18 02:36:46.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 18 02:36:47.872: INFO: stderr: "I0918 02:36:47.757431     707 log.go:172] (0x24a48c0) (0x24a4a10) Create stream\nI0918 02:36:47.760918     707 log.go:172] (0x24a48c0) (0x24a4a10) Stream added, broadcasting: 1\nI0918 02:36:47.778663     707 log.go:172] (0x24a48c0) Reply frame received for 1\nI0918 02:36:47.779611     707 log.go:172] (0x24a48c0) (0x291a000) Create stream\nI0918 02:36:47.779745     707 log.go:172] (0x24a48c0) (0x291a000) Stream added, broadcasting: 3\nI0918 02:36:47.781506     707 log.go:172] (0x24a48c0) Reply frame received for 3\nI0918 02:36:47.781770     707 log.go:172] (0x24a48c0) (0x24a4a80) Create stream\nI0918 02:36:47.781828     707 log.go:172] (0x24a48c0) (0x24a4a80) Stream added, broadcasting: 5\nI0918 02:36:47.782963     707 log.go:172] (0x24a48c0) Reply frame received for 5\nI0918 02:36:47.855408     707 log.go:172] (0x24a48c0) Data frame received for 3\nI0918 02:36:47.855699     707 log.go:172] (0x24a48c0) Data frame received for 1\nI0918 02:36:47.856061     707 log.go:172] (0x24a48c0) Data frame received for 5\nI0918 02:36:47.856484     707 log.go:172] (0x24a4a80) (5) Data frame handling\nI0918 02:36:47.856858     707 log.go:172] (0x291a000) (3) Data frame handling\nI0918 02:36:47.857134     707 log.go:172] (0x24a4a10) (1) Data frame handling\nI0918 02:36:47.858690     707 log.go:172] (0x24a4a80) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0918 02:36:47.859596     707 log.go:172] (0x291a000) (3) Data frame sent\nI0918 02:36:47.859725     707 log.go:172] (0x24a48c0) Data frame received for 3\nI0918 02:36:47.859812     707 log.go:172] (0x291a000) (3) Data frame handling\nI0918 02:36:47.860388     707 log.go:172] (0x24a48c0) Data frame received for 5\nI0918 02:36:47.860492     707 log.go:172] (0x24a4a80) (5) Data frame handling\nI0918 02:36:47.860589     707 log.go:172] (0x24a4a10) (1) Data frame sent\nI0918 02:36:47.861523     707 log.go:172] (0x24a48c0) (0x24a4a10) Stream removed, broadcasting: 1\nI0918 
02:36:47.864344     707 log.go:172] (0x24a48c0) Go away received\nI0918 02:36:47.865384     707 log.go:172] (0x24a48c0) (0x24a4a10) Stream removed, broadcasting: 1\nI0918 02:36:47.865706     707 log.go:172] (0x24a48c0) (0x291a000) Stream removed, broadcasting: 3\nI0918 02:36:47.866111     707 log.go:172] (0x24a48c0) (0x24a4a80) Stream removed, broadcasting: 5\n"
Sep 18 02:36:47.873: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 18 02:36:47.873: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 18 02:36:47.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 18 02:36:49.254: INFO: stderr: "I0918 02:36:49.103179     731 log.go:172] (0x2660070) (0x26602a0) Create stream\nI0918 02:36:49.106247     731 log.go:172] (0x2660070) (0x26602a0) Stream added, broadcasting: 1\nI0918 02:36:49.114761     731 log.go:172] (0x2660070) Reply frame received for 1\nI0918 02:36:49.115208     731 log.go:172] (0x2660070) (0x269a1c0) Create stream\nI0918 02:36:49.115273     731 log.go:172] (0x2660070) (0x269a1c0) Stream added, broadcasting: 3\nI0918 02:36:49.117041     731 log.go:172] (0x2660070) Reply frame received for 3\nI0918 02:36:49.117603     731 log.go:172] (0x2660070) (0x24a27e0) Create stream\nI0918 02:36:49.117743     731 log.go:172] (0x2660070) (0x24a27e0) Stream added, broadcasting: 5\nI0918 02:36:49.119833     731 log.go:172] (0x2660070) Reply frame received for 5\nI0918 02:36:49.204061     731 log.go:172] (0x2660070) Data frame received for 5\nI0918 02:36:49.204608     731 log.go:172] (0x24a27e0) (5) Data frame handling\nI0918 02:36:49.205403     731 log.go:172] (0x24a27e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0918 02:36:49.234964     731 log.go:172] (0x2660070) Data frame received for 3\nI0918 02:36:49.235096     731 log.go:172] (0x269a1c0) (3) Data frame handling\nI0918 02:36:49.235232     731 log.go:172] (0x269a1c0) (3) Data frame sent\nI0918 02:36:49.235366     731 log.go:172] (0x2660070) Data frame received for 3\nI0918 02:36:49.235458     731 log.go:172] (0x269a1c0) (3) Data frame handling\nI0918 02:36:49.235800     731 log.go:172] (0x2660070) Data frame received for 5\nI0918 02:36:49.236040     731 log.go:172] (0x24a27e0) (5) Data frame handling\nI0918 02:36:49.237871     731 log.go:172] (0x2660070) Data frame received for 1\nI0918 02:36:49.238044     731 log.go:172] (0x26602a0) (1) Data frame handling\nI0918 02:36:49.238213     731 log.go:172] (0x26602a0) (1) Data frame sent\nI0918 02:36:49.239039     731 log.go:172] (0x2660070) (0x26602a0) Stream removed, broadcasting: 1\nI0918 
02:36:49.243396     731 log.go:172] (0x2660070) Go away received\nI0918 02:36:49.245631     731 log.go:172] (0x2660070) (0x26602a0) Stream removed, broadcasting: 1\nI0918 02:36:49.246090     731 log.go:172] (0x2660070) (0x269a1c0) Stream removed, broadcasting: 3\nI0918 02:36:49.246452     731 log.go:172] (0x2660070) (0x24a27e0) Stream removed, broadcasting: 5\n"
Sep 18 02:36:49.255: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 18 02:36:49.255: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 18 02:36:49.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 18 02:36:50.666: INFO: stderr: "I0918 02:36:50.538469     753 log.go:172] (0x2a782a0) (0x2a78310) Create stream\nI0918 02:36:50.541411     753 log.go:172] (0x2a782a0) (0x2a78310) Stream added, broadcasting: 1\nI0918 02:36:50.555459     753 log.go:172] (0x2a782a0) Reply frame received for 1\nI0918 02:36:50.555916     753 log.go:172] (0x2a782a0) (0x24ac8c0) Create stream\nI0918 02:36:50.555990     753 log.go:172] (0x2a782a0) (0x24ac8c0) Stream added, broadcasting: 3\nI0918 02:36:50.557173     753 log.go:172] (0x2a782a0) Reply frame received for 3\nI0918 02:36:50.557394     753 log.go:172] (0x2a782a0) (0x291e070) Create stream\nI0918 02:36:50.557458     753 log.go:172] (0x2a782a0) (0x291e070) Stream added, broadcasting: 5\nI0918 02:36:50.558281     753 log.go:172] (0x2a782a0) Reply frame received for 5\nI0918 02:36:50.620366     753 log.go:172] (0x2a782a0) Data frame received for 5\nI0918 02:36:50.620739     753 log.go:172] (0x291e070) (5) Data frame handling\nI0918 02:36:50.621437     753 log.go:172] (0x291e070) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0918 02:36:50.653064     753 log.go:172] (0x2a782a0) Data frame received for 5\nI0918 02:36:50.653258     753 log.go:172] (0x291e070) (5) Data frame handling\nI0918 02:36:50.653512     753 log.go:172] (0x2a782a0) Data frame received for 3\nI0918 02:36:50.653771     753 log.go:172] (0x24ac8c0) (3) Data frame handling\nI0918 02:36:50.654012     753 log.go:172] (0x24ac8c0) (3) Data frame sent\nI0918 02:36:50.654245     753 log.go:172] (0x2a782a0) Data frame received for 3\nI0918 02:36:50.654417     753 log.go:172] (0x24ac8c0) (3) Data frame handling\nI0918 02:36:50.654587     753 log.go:172] (0x2a782a0) Data frame received for 1\nI0918 02:36:50.654699     753 log.go:172] (0x2a78310) (1) Data frame handling\nI0918 02:36:50.654797     753 log.go:172] (0x2a78310) (1) Data frame sent\nI0918 02:36:50.655466     753 log.go:172] (0x2a782a0) (0x2a78310) Stream removed, broadcasting: 1\nI0918 
02:36:50.657933     753 log.go:172] (0x2a782a0) Go away received\nI0918 02:36:50.659482     753 log.go:172] (0x2a782a0) (0x2a78310) Stream removed, broadcasting: 1\nI0918 02:36:50.659649     753 log.go:172] (0x2a782a0) (0x24ac8c0) Stream removed, broadcasting: 3\nI0918 02:36:50.659804     753 log.go:172] (0x2a782a0) (0x291e070) Stream removed, broadcasting: 5\n"
Sep 18 02:36:50.667: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 18 02:36:50.668: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

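The three `kubectl exec ... mv -v /usr/share/nginx/html/index.html /tmp/ || true` calls above deliberately break each pod's HTTP readiness probe by moving the page nginx serves. The trailing `|| true` matters: it forces exit status 0 even when `mv` itself fails, so any nonzero `rc` reported by the framework can only come from the exec transport, not from the command. A minimal local sketch of that guard (paths are throwaway temp files, not the pod's real web root):

```shell
# Demonstrate the `|| true` guard used by the test's RunHostCmd invocations:
# even when mv fails (source file does not exist), the shell exits 0.
tmpdir=$(mktemp -d)
sh -c "mv -v '$tmpdir/absent.html' '$tmpdir/dest.html' 2>/dev/null || true"
status=$?
echo "exit=$status"
```

Because the command can never fail on its own, the later `rc: 1` results for ss-2 point at the exec machinery (container already terminating, then pod deleted) rather than at `mv`.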
Sep 18 02:36:50.668: INFO: Waiting for statefulset status.replicas updated to 0
Sep 18 02:36:50.673: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Sep 18 02:37:00.689: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 18 02:37:00.689: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Sep 18 02:37:00.689: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Sep 18 02:37:00.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999989223s
Sep 18 02:37:01.721: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990256497s
Sep 18 02:37:02.732: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979973409s
Sep 18 02:37:03.741: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969039289s
Sep 18 02:37:04.752: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.959314632s
Sep 18 02:37:05.762: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.948979222s
Sep 18 02:37:06.773: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.938467118s
Sep 18 02:37:07.795: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.927852887s
Sep 18 02:37:08.806: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.905888548s
Sep 18 02:37:09.817: INFO: Verifying statefulset ss doesn't scale past 3 for another 894.661161ms
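The countdown lines above show the framework polling roughly once per second for a 10s confirmation window, verifying the replica count never exceeds 3 while all pods are unready. A hedged sketch of that "hold steady" loop; `get_replicas` is a hypothetical stand-in for something like `kubectl get sts ss -o jsonpath='{.status.replicas}'`, and the window is shortened so the demo runs instantly:

```shell
# Sketch of the "doesn't scale past 3" confirmation window:
# poll the replica count each iteration and fail fast if it ever exceeds 3.
get_replicas() { echo 3; }   # hypothetical stand-in for a kubectl query
deadline=3                   # the real test holds the check for 10s
t=0
while [ "$t" -lt "$deadline" ]; do
  r=$(get_replicas)
  if [ "$r" -gt 3 ]; then
    echo "scaled past 3: $r"
    exit 1
  fi
  t=$((t+1))
done
echo "held at 3 for ${deadline} checks"
```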
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-6815
Sep 18 02:37:10.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:37:12.207: INFO: stderr: "I0918 02:37:12.101948     775 log.go:172] (0x29efd50) (0x29efdc0) Create stream\nI0918 02:37:12.107228     775 log.go:172] (0x29efd50) (0x29efdc0) Stream added, broadcasting: 1\nI0918 02:37:12.124210     775 log.go:172] (0x29efd50) Reply frame received for 1\nI0918 02:37:12.124743     775 log.go:172] (0x29efd50) (0x2828d20) Create stream\nI0918 02:37:12.124836     775 log.go:172] (0x29efd50) (0x2828d20) Stream added, broadcasting: 3\nI0918 02:37:12.126130     775 log.go:172] (0x29efd50) Reply frame received for 3\nI0918 02:37:12.126464     775 log.go:172] (0x29efd50) (0x2962230) Create stream\nI0918 02:37:12.126549     775 log.go:172] (0x29efd50) (0x2962230) Stream added, broadcasting: 5\nI0918 02:37:12.127694     775 log.go:172] (0x29efd50) Reply frame received for 5\nI0918 02:37:12.188790     775 log.go:172] (0x29efd50) Data frame received for 3\nI0918 02:37:12.189034     775 log.go:172] (0x2828d20) (3) Data frame handling\nI0918 02:37:12.189265     775 log.go:172] (0x29efd50) Data frame received for 5\nI0918 02:37:12.189532     775 log.go:172] (0x2962230) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0918 02:37:12.189745     775 log.go:172] (0x2828d20) (3) Data frame sent\nI0918 02:37:12.189948     775 log.go:172] (0x29efd50) Data frame received for 1\nI0918 02:37:12.190095     775 log.go:172] (0x2962230) (5) Data frame sent\nI0918 02:37:12.190241     775 log.go:172] (0x29efd50) Data frame received for 3\nI0918 02:37:12.190321     775 log.go:172] (0x2828d20) (3) Data frame handling\nI0918 02:37:12.190424     775 log.go:172] (0x29efd50) Data frame received for 5\nI0918 02:37:12.190574     775 log.go:172] (0x2962230) (5) Data frame handling\nI0918 02:37:12.190693     775 log.go:172] (0x29efdc0) (1) Data frame handling\nI0918 02:37:12.190841     775 log.go:172] (0x29efdc0) (1) Data frame sent\nI0918 02:37:12.192829     775 log.go:172] (0x29efd50) (0x29efdc0) Stream removed, broadcasting: 1\nI0918 
02:37:12.194576     775 log.go:172] (0x29efd50) Go away received\nI0918 02:37:12.198580     775 log.go:172] (0x29efd50) (0x29efdc0) Stream removed, broadcasting: 1\nI0918 02:37:12.198928     775 log.go:172] (0x29efd50) (0x2828d20) Stream removed, broadcasting: 3\nI0918 02:37:12.199214     775 log.go:172] (0x29efd50) (0x2962230) Stream removed, broadcasting: 5\n"
Sep 18 02:37:12.208: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 18 02:37:12.208: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 18 02:37:12.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:37:13.573: INFO: stderr: "I0918 02:37:13.450712     796 log.go:172] (0x2abce00) (0x2abce70) Create stream\nI0918 02:37:13.455230     796 log.go:172] (0x2abce00) (0x2abce70) Stream added, broadcasting: 1\nI0918 02:37:13.470270     796 log.go:172] (0x2abce00) Reply frame received for 1\nI0918 02:37:13.470853     796 log.go:172] (0x2abce00) (0x273c850) Create stream\nI0918 02:37:13.470965     796 log.go:172] (0x2abce00) (0x273c850) Stream added, broadcasting: 3\nI0918 02:37:13.472534     796 log.go:172] (0x2abce00) Reply frame received for 3\nI0918 02:37:13.472924     796 log.go:172] (0x2abce00) (0x2960000) Create stream\nI0918 02:37:13.473037     796 log.go:172] (0x2abce00) (0x2960000) Stream added, broadcasting: 5\nI0918 02:37:13.474444     796 log.go:172] (0x2abce00) Reply frame received for 5\nI0918 02:37:13.553768     796 log.go:172] (0x2abce00) Data frame received for 5\nI0918 02:37:13.554037     796 log.go:172] (0x2960000) (5) Data frame handling\nI0918 02:37:13.554303     796 log.go:172] (0x2abce00) Data frame received for 3\nI0918 02:37:13.554530     796 log.go:172] (0x273c850) (3) Data frame handling\nI0918 02:37:13.554775     796 log.go:172] (0x2960000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0918 02:37:13.555179     796 log.go:172] (0x273c850) (3) Data frame sent\nI0918 02:37:13.555425     796 log.go:172] (0x2abce00) Data frame received for 3\nI0918 02:37:13.556041     796 log.go:172] (0x273c850) (3) Data frame handling\nI0918 02:37:13.557096     796 log.go:172] (0x2abce00) Data frame received for 1\nI0918 02:37:13.557641     796 log.go:172] (0x2abce00) Data frame received for 5\nI0918 02:37:13.557774     796 log.go:172] (0x2960000) (5) Data frame handling\nI0918 02:37:13.557919     796 log.go:172] (0x2abce70) (1) Data frame handling\nI0918 02:37:13.558118     796 log.go:172] (0x2abce70) (1) Data frame sent\nI0918 02:37:13.560180     796 log.go:172] (0x2abce00) (0x2abce70) Stream removed, broadcasting: 1\nI0918 
02:37:13.561249     796 log.go:172] (0x2abce00) Go away received\nI0918 02:37:13.565120     796 log.go:172] (0x2abce00) (0x2abce70) Stream removed, broadcasting: 1\nI0918 02:37:13.565426     796 log.go:172] (0x2abce00) (0x273c850) Stream removed, broadcasting: 3\nI0918 02:37:13.565676     796 log.go:172] (0x2abce00) (0x2960000) Stream removed, broadcasting: 5\n"
Sep 18 02:37:13.575: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 18 02:37:13.575: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 18 02:37:13.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:37:15.172: INFO: rc: 1
Sep 18 02:37:15.174: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    I0918 02:37:14.824741     817 log.go:172] (0x24b8930) (0x24b9110) Create stream
I0918 02:37:14.826469     817 log.go:172] (0x24b8930) (0x24b9110) Stream added, broadcasting: 1
I0918 02:37:14.834382     817 log.go:172] (0x24b8930) Reply frame received for 1
I0918 02:37:14.834817     817 log.go:172] (0x24b8930) (0x26a80e0) Create stream
I0918 02:37:14.834907     817 log.go:172] (0x24b8930) (0x26a80e0) Stream added, broadcasting: 3
I0918 02:37:14.836624     817 log.go:172] (0x24b8930) Reply frame received for 3
I0918 02:37:14.837190     817 log.go:172] (0x24b8930) (0x24b9880) Create stream
I0918 02:37:14.837310     817 log.go:172] (0x24b8930) (0x24b9880) Stream added, broadcasting: 5
I0918 02:37:14.839073     817 log.go:172] (0x24b8930) Reply frame received for 5
I0918 02:37:15.152337     817 log.go:172] (0x24b8930) Data frame received for 1
I0918 02:37:15.153404     817 log.go:172] (0x24b8930) (0x26a80e0) Stream removed, broadcasting: 3
I0918 02:37:15.155228     817 log.go:172] (0x24b8930) (0x24b9880) Stream removed, broadcasting: 5
I0918 02:37:15.155515     817 log.go:172] (0x24b9110) (1) Data frame handling
I0918 02:37:15.158043     817 log.go:172] (0x24b9110) (1) Data frame sent
I0918 02:37:15.158686     817 log.go:172] (0x24b8930) (0x24b9110) Stream removed, broadcasting: 1
I0918 02:37:15.159029     817 log.go:172] (0x24b8930) Go away received
I0918 02:37:15.163445     817 log.go:172] (0x24b8930) (0x24b9110) Stream removed, broadcasting: 1
I0918 02:37:15.163812     817 log.go:172] (0x24b8930) (0x26a80e0) Stream removed, broadcasting: 3
I0918 02:37:15.163947     817 log.go:172] (0x24b8930) (0x24b9880) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "3b4b39e24658172059bc40fe516a5c249c295f6fc966a1a825ef47da4d321bad": task ecc429cda1475d1656ee911f5d248c6294cee39cdd343440cbc750e95109982a not found: not found
 []  0x778a120 exit status 1   true [0x8dd8050 0x8dd8070 0x8dd8090] [0x8dd8050 0x8dd8070 0x8dd8090] [0x8dd8068 0x8dd8088] [0x6bbb70 0x6bbb70] 0x803c2c0 }:
Command stdout:

stderr:
I0918 02:37:14.824741     817 log.go:172] (0x24b8930) (0x24b9110) Create stream
I0918 02:37:14.826469     817 log.go:172] (0x24b8930) (0x24b9110) Stream added, broadcasting: 1
I0918 02:37:14.834382     817 log.go:172] (0x24b8930) Reply frame received for 1
I0918 02:37:14.834817     817 log.go:172] (0x24b8930) (0x26a80e0) Create stream
I0918 02:37:14.834907     817 log.go:172] (0x24b8930) (0x26a80e0) Stream added, broadcasting: 3
I0918 02:37:14.836624     817 log.go:172] (0x24b8930) Reply frame received for 3
I0918 02:37:14.837190     817 log.go:172] (0x24b8930) (0x24b9880) Create stream
I0918 02:37:14.837310     817 log.go:172] (0x24b8930) (0x24b9880) Stream added, broadcasting: 5
I0918 02:37:14.839073     817 log.go:172] (0x24b8930) Reply frame received for 5
I0918 02:37:15.152337     817 log.go:172] (0x24b8930) Data frame received for 1
I0918 02:37:15.153404     817 log.go:172] (0x24b8930) (0x26a80e0) Stream removed, broadcasting: 3
I0918 02:37:15.155228     817 log.go:172] (0x24b8930) (0x24b9880) Stream removed, broadcasting: 5
I0918 02:37:15.155515     817 log.go:172] (0x24b9110) (1) Data frame handling
I0918 02:37:15.158043     817 log.go:172] (0x24b9110) (1) Data frame sent
I0918 02:37:15.158686     817 log.go:172] (0x24b8930) (0x24b9110) Stream removed, broadcasting: 1
I0918 02:37:15.159029     817 log.go:172] (0x24b8930) Go away received
I0918 02:37:15.163445     817 log.go:172] (0x24b8930) (0x24b9110) Stream removed, broadcasting: 1
I0918 02:37:15.163812     817 log.go:172] (0x24b8930) (0x26a80e0) Stream removed, broadcasting: 3
I0918 02:37:15.163947     817 log.go:172] (0x24b8930) (0x24b9880) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "3b4b39e24658172059bc40fe516a5c249c295f6fc966a1a825ef47da4d321bad": task ecc429cda1475d1656ee911f5d248c6294cee39cdd343440cbc750e95109982a not found: not found

error:
exit status 1
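The first failed exec above (the `task ... not found` internal error) hit a pod that was already terminating; every retry after it fails with `pods "ss-2" not found` because the scale-down has deleted the pod, and the framework simply keeps retrying on its 10s interval until its overall timeout. A hedged stand-in for that retry loop; `fail_exec` is hypothetical and always fails, the way the real `kubectl exec` does once ss-2 is gone, and the interval is zeroed so the demo is fast:

```shell
# Sketch of the framework's RunHostCmd retry: rerun a failing command
# every $interval seconds until it succeeds or the attempt budget runs out.
fail_exec() { return 1; }   # hypothetical stand-in for the failing kubectl exec
attempts=3                  # the real loop is bounded by the suite timeout
interval=0                  # the real framework waits 10s between tries
n=0
until fail_exec; do
  n=$((n+1))
  if [ "$n" -ge "$attempts" ]; then
    break
  fi
  sleep "$interval"
done
echo "gave up after $n attempts"
```

Seen this way, the long run of identical `rc: 1` blocks that follows is expected behavior, not a cascading failure: the loop is waiting out a pod that will never come back in this phase of the test.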
Sep 18 02:37:25.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:37:26.297: INFO: rc: 1
Sep 18 02:37:26.298: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x8c13800 exit status 1   true [0x8c3c450 0x8c3c470 0x8c3c490] [0x8c3c450 0x8c3c470 0x8c3c490] [0x8c3c468 0x8c3c488] [0x6bbb70 0x6bbb70] 0x8d9d080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 18 02:37:36.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:37:37.451: INFO: rc: 1
Sep 18 02:37:37.451: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x771e8d0 exit status 1   true [0x7f821b0 0x7f821d0 0x7f82200] [0x7f821b0 0x7f821d0 0x7f82200] [0x7f821c8 0x7f821f8] [0x6bbb70 0x6bbb70] 0x7795780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 18 02:37:47.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:37:48.567: INFO: rc: 1
Sep 18 02:37:48.567: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x778a210 exit status 1   true [0x8dd8138 0x8dd8158 0x8dd8178] [0x8dd8138 0x8dd8158 0x8dd8178] [0x8dd8150 0x8dd8170] [0x6bbb70 0x6bbb70] 0x803c740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 18 02:37:58.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:37:59.663: INFO: rc: 1
Sep 18 02:37:59.663: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x771e9c0 exit status 1   true [0x7f82300 0x7f82328 0x7f82348] [0x7f82300 0x7f82328 0x7f82348] [0x7f82318 0x7f82340] [0x6bbb70 0x6bbb70] 0x7795e80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 18 02:38:09.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:38:10.820: INFO: rc: 1
Sep 18 02:38:10.821: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x771ea80 exit status 1   true [0x7f82380 0x7f823a0 0x7f823c8] [0x7f82380 0x7f823a0 0x7f823c8] [0x7f82398 0x7f823c0] [0x6bbb70 0x6bbb70] 0x88ea300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 18 02:38:20.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:38:21.924: INFO: rc: 1
Sep 18 02:38:21.924: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x8c138f0 exit status 1   true [0x8c3c530 0x8c3c550 0x8c3c570] [0x8c3c530 0x8c3c550 0x8c3c570] [0x8c3c548 0x8c3c568] [0x6bbb70 0x6bbb70] 0x8d9d540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 18 02:38:31.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:38:33.041: INFO: rc: 1
Sep 18 02:38:33.042: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x8c139e0 exit status 1   true [0x8c3c610 0x8c3c630 0x8c3c650] [0x8c3c610 0x8c3c630 0x8c3c650] [0x8c3c628 0x8c3c648] [0x6bbb70 0x6bbb70] 0x8d9d800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 18 02:38:43.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:38:44.146: INFO: rc: 1
Sep 18 02:38:44.146: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x771eb40 exit status 1   true [0x7f82418 0x7f82440 0x7f82460] [0x7f82418 0x7f82440 0x7f82460] [0x7f82430 0x7f82458] [0x6bbb70 0x6bbb70] 0x88eaa40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 18 02:38:54.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:38:55.240: INFO: rc: 1
Sep 18 02:38:55.240: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x8c13ad0 exit status 1   true [0x8c3c6f0 0x8c3c710 0x8c3c730] [0x8c3c6f0 0x8c3c710 0x8c3c730] [0x8c3c708 0x8c3c728] [0x6bbb70 0x6bbb70] 0x8d9da80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 18 02:39:05.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:39:06.349: INFO: rc: 1
Sep 18 02:39:06.350: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x830c420 exit status 1   true [0x8906528 0x8906548 0x8906568] [0x8906528 0x8906548 0x8906568] [0x8906540 0x8906560] [0x6bbb70 0x6bbb70] 0x86b2880 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 18 02:39:16.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:39:17.458: INFO: rc: 1
Sep 18 02:39:17.459: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x778a090 exit status 1   true [0x8dd8030 0x8dd8050 0x8dd8070] [0x8dd8030 0x8dd8050 0x8dd8070] [0x8dd8048 0x8dd8068] [0x6bbb70 0x6bbb70] 0x6ec0b80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 18 02:39:27.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:39:28.767: INFO: rc: 1
Sep 18 02:39:28.768: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x8c12090 exit status 1   true [0x8c3c028 0x8c3c048 0x8c3c068] [0x8c3c028 0x8c3c048 0x8c3c068] [0x8c3c040 0x8c3c060] [0x6bbb70 0x6bbb70] 0x7794cc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
[... 14 further identical attempts between 02:39:38 and 02:42:08, each retried after 10s and each failing the same way: rc: 1, empty stdout, and stderr 'Error from server (NotFound): pods "ss-2" not found' ...]
Sep 18 02:42:18.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6815 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 18 02:42:20.020: INFO: rc: 1
Sep 18 02:42:20.021: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Sep 18 02:42:20.021: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep 18 02:42:20.047: INFO: Deleting all statefulset in ns statefulset-6815
Sep 18 02:42:20.051: INFO: Scaling statefulset ss to 0
Sep 18 02:42:20.064: INFO: Waiting for statefulset status.replicas updated to 0
Sep 18 02:42:20.067: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:42:20.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6815" for this suite.
Sep 18 02:42:26.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:42:26.295: INFO: namespace statefulset-6815 deletion completed in 6.173806386s

• [SLOW TEST:383.410 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
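A subtlety worth noting in the retry loop above: the `|| true` is part of the command passed to the pod's shell, so it only guards the remote `mv`. When the API server reports the pod as NotFound, `kubectl exec` itself exits 1 before the remote shell ever runs, which is why every attempt still logs `rc: 1`. A minimal local sketch of the same behavior, with a hypothetical `run_remote` function standing in for `kubectl exec`:

```shell
# run_remote stands in for 'kubectl exec' failing server-side: the inner
# shell command is never executed, so its '|| true' cannot mask the failure.
run_remote() {
  echo 'Error from server (NotFound): pods "ss-2" not found' >&2
  return 1
}

run_remote /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
echo "rc=$?"   # prints rc=1: the '|| true' lives inside a command that never ran
```

To make `kubectl`'s own failure non-fatal, the `|| true` would have to wrap the whole `kubectl exec` invocation on the caller's side, not the command string sent to the pod.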
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:42:26.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:42:26.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7213" for this suite.
Sep 18 02:42:48.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:42:48.642: INFO: namespace pods-7213 deletion completed in 22.238092582s

• [SLOW TEST:22.342 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
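The "verifying QOS class is set on the pod" step above refers to the `status.qosClass` field the kubelet derives from the pod's resource requests and limits. A hypothetical manifest (names and values are illustrative, not from the test) that would land in the Guaranteed class:

```yaml
# Hypothetical pod: equal requests and limits for every container yield
# status.qosClass: Guaranteed. Requests below limits would be Burstable;
# no requests or limits at all would be BestEffort.
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 100m
        memory: 128Mi
```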
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:42:48.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 18 02:42:48.750: INFO: Waiting up to 5m0s for pod "pod-f98c4e59-bc17-47b8-bad6-e6b9de6a6f0f" in namespace "emptydir-685" to be "success or failure"
Sep 18 02:42:48.758: INFO: Pod "pod-f98c4e59-bc17-47b8-bad6-e6b9de6a6f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.195226ms
Sep 18 02:42:50.765: INFO: Pod "pod-f98c4e59-bc17-47b8-bad6-e6b9de6a6f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015189674s
Sep 18 02:42:52.773: INFO: Pod "pod-f98c4e59-bc17-47b8-bad6-e6b9de6a6f0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023348314s
STEP: Saw pod success
Sep 18 02:42:52.774: INFO: Pod "pod-f98c4e59-bc17-47b8-bad6-e6b9de6a6f0f" satisfied condition "success or failure"
Sep 18 02:42:52.778: INFO: Trying to get logs from node iruya-worker2 pod pod-f98c4e59-bc17-47b8-bad6-e6b9de6a6f0f container test-container: 
STEP: delete the pod
Sep 18 02:42:52.803: INFO: Waiting for pod pod-f98c4e59-bc17-47b8-bad6-e6b9de6a6f0f to disappear
Sep 18 02:42:52.807: INFO: Pod pod-f98c4e59-bc17-47b8-bad6-e6b9de6a6f0f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:42:52.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-685" for this suite.
Sep 18 02:42:58.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:42:59.024: INFO: namespace emptydir-685 deletion completed in 6.206832235s

• [SLOW TEST:10.381 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
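The "emptydir 0666 on node default medium" pod above exercises an emptyDir volume backed by node disk (the default medium, as opposed to `medium: Memory`), with the test container checking file mode and ownership inside the mount. A hypothetical equivalent manifest, with illustrative names:

```yaml
# Hypothetical pod mirroring the emptyDir (root,0666,default) case:
# a scratch volume on node disk, written and inspected by one container.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /data/f && chmod 0666 /data/f && ls -l /data/f"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}
```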
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:42:59.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-836e7e72-d41c-4669-b40b-f3778cf70e9a
STEP: Creating a pod to test consume configMaps
Sep 18 02:42:59.166: INFO: Waiting up to 5m0s for pod "pod-configmaps-22235fcb-68ff-4a25-b44e-398ebe35ce9e" in namespace "configmap-5118" to be "success or failure"
Sep 18 02:42:59.188: INFO: Pod "pod-configmaps-22235fcb-68ff-4a25-b44e-398ebe35ce9e": Phase="Pending", Reason="", readiness=false. Elapsed: 22.031439ms
Sep 18 02:43:01.199: INFO: Pod "pod-configmaps-22235fcb-68ff-4a25-b44e-398ebe35ce9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032920133s
Sep 18 02:43:03.206: INFO: Pod "pod-configmaps-22235fcb-68ff-4a25-b44e-398ebe35ce9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039658417s
STEP: Saw pod success
Sep 18 02:43:03.206: INFO: Pod "pod-configmaps-22235fcb-68ff-4a25-b44e-398ebe35ce9e" satisfied condition "success or failure"
Sep 18 02:43:03.211: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-22235fcb-68ff-4a25-b44e-398ebe35ce9e container configmap-volume-test: 
STEP: delete the pod
Sep 18 02:43:03.280: INFO: Waiting for pod pod-configmaps-22235fcb-68ff-4a25-b44e-398ebe35ce9e to disappear
Sep 18 02:43:03.358: INFO: Pod pod-configmaps-22235fcb-68ff-4a25-b44e-398ebe35ce9e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:43:03.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5118" for this suite.
Sep 18 02:43:09.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:43:09.561: INFO: namespace configmap-5118 deletion completed in 6.194020069s

• [SLOW TEST:10.536 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
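"Consumable in multiple volumes in the same pod" means one ConfigMap mounted through two separate volumes, each with its own mount path. A hypothetical sketch of such a pod (ConfigMap and path names are illustrative):

```yaml
# Hypothetical pod consuming a single ConfigMap ('demo-config', assumed
# to exist) through two distinct volume mounts in the same container.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-a/key /etc/cm-b/key"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    configMap:
      name: demo-config
  - name: cm-b
    configMap:
      name: demo-config
```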
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:43:09.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 02:43:09.678: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af485f79-3418-4ad4-b436-ce966533363c" in namespace "projected-1603" to be "success or failure"
Sep 18 02:43:09.722: INFO: Pod "downwardapi-volume-af485f79-3418-4ad4-b436-ce966533363c": Phase="Pending", Reason="", readiness=false. Elapsed: 42.900479ms
Sep 18 02:43:11.743: INFO: Pod "downwardapi-volume-af485f79-3418-4ad4-b436-ce966533363c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064073917s
Sep 18 02:43:13.997: INFO: Pod "downwardapi-volume-af485f79-3418-4ad4-b436-ce966533363c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318150434s
Sep 18 02:43:16.005: INFO: Pod "downwardapi-volume-af485f79-3418-4ad4-b436-ce966533363c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.326010784s
STEP: Saw pod success
Sep 18 02:43:16.005: INFO: Pod "downwardapi-volume-af485f79-3418-4ad4-b436-ce966533363c" satisfied condition "success or failure"
Sep 18 02:43:16.011: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-af485f79-3418-4ad4-b436-ce966533363c container client-container: 
STEP: delete the pod
Sep 18 02:43:16.064: INFO: Waiting for pod downwardapi-volume-af485f79-3418-4ad4-b436-ce966533363c to disappear
Sep 18 02:43:16.105: INFO: Pod downwardapi-volume-af485f79-3418-4ad4-b436-ce966533363c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:43:16.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1603" for this suite.
Sep 18 02:43:22.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:43:22.351: INFO: namespace projected-1603 deletion completed in 6.235420811s

• [SLOW TEST:12.785 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
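The projected downwardAPI test above surfaces the container's memory request as a file via a `resourceFieldRef`. A hypothetical manifest showing the shape of such a pod (names and sizes are illustrative):

```yaml
# Hypothetical pod exposing its own memory request through a projected
# downwardAPI volume; the container reads it back from /etc/podinfo.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```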
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:43:22.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:43:27.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3008" for this suite.
Sep 18 02:43:34.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:43:34.271: INFO: namespace watch-3008 deletion completed in 6.294533575s

• [SLOW TEST:11.918 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:43:34.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 18 02:43:34.396: INFO: Waiting up to 5m0s for pod "pod-56743fb7-0f0e-45f6-a02b-463a3ec54c5f" in namespace "emptydir-9600" to be "success or failure"
Sep 18 02:43:34.415: INFO: Pod "pod-56743fb7-0f0e-45f6-a02b-463a3ec54c5f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.962736ms
Sep 18 02:43:36.432: INFO: Pod "pod-56743fb7-0f0e-45f6-a02b-463a3ec54c5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035667597s
Sep 18 02:43:38.439: INFO: Pod "pod-56743fb7-0f0e-45f6-a02b-463a3ec54c5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042406209s
STEP: Saw pod success
Sep 18 02:43:38.439: INFO: Pod "pod-56743fb7-0f0e-45f6-a02b-463a3ec54c5f" satisfied condition "success or failure"
Sep 18 02:43:38.444: INFO: Trying to get logs from node iruya-worker pod pod-56743fb7-0f0e-45f6-a02b-463a3ec54c5f container test-container: 
STEP: delete the pod
Sep 18 02:43:38.458: INFO: Waiting for pod pod-56743fb7-0f0e-45f6-a02b-463a3ec54c5f to disappear
Sep 18 02:43:38.461: INFO: Pod pod-56743fb7-0f0e-45f6-a02b-463a3ec54c5f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:43:38.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9600" for this suite.
Sep 18 02:43:44.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:43:44.636: INFO: namespace emptydir-9600 deletion completed in 6.16610405s

• [SLOW TEST:10.364 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:43:44.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 02:43:44.738: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d8f56ba-effc-46cd-82b6-bd2bd73d7eed" in namespace "downward-api-5343" to be "success or failure"
Sep 18 02:43:44.751: INFO: Pod "downwardapi-volume-4d8f56ba-effc-46cd-82b6-bd2bd73d7eed": Phase="Pending", Reason="", readiness=false. Elapsed: 13.2537ms
Sep 18 02:43:48.463: INFO: Pod "downwardapi-volume-4d8f56ba-effc-46cd-82b6-bd2bd73d7eed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.725432807s
Sep 18 02:43:50.471: INFO: Pod "downwardapi-volume-4d8f56ba-effc-46cd-82b6-bd2bd73d7eed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.733148231s
STEP: Saw pod success
Sep 18 02:43:50.471: INFO: Pod "downwardapi-volume-4d8f56ba-effc-46cd-82b6-bd2bd73d7eed" satisfied condition "success or failure"
Sep 18 02:43:50.475: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4d8f56ba-effc-46cd-82b6-bd2bd73d7eed container client-container: 
STEP: delete the pod
Sep 18 02:43:50.506: INFO: Waiting for pod downwardapi-volume-4d8f56ba-effc-46cd-82b6-bd2bd73d7eed to disappear
Sep 18 02:43:50.513: INFO: Pod downwardapi-volume-4d8f56ba-effc-46cd-82b6-bd2bd73d7eed no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:43:50.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5343" for this suite.
Sep 18 02:43:56.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:43:56.816: INFO: namespace downward-api-5343 deletion completed in 6.295430868s

• [SLOW TEST:12.176 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
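For reference, the spec above exercises the downwardAPI volume plugin exposing `limits.memory` as a file the container reads. A minimal sketch of the kind of pod such a test creates (name, image, and paths are illustrative, not taken from this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the file the downward API populated, then exit,
    # so the pod reaches Phase=Succeeded as polled above.
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

The framework then fetches the container's logs (the "Trying to get logs" line) and checks that the printed value matches the declared memory limit.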
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:43:56.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:44:02.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3142" for this suite.
Sep 18 02:44:42.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:44:43.138: INFO: namespace kubelet-test-3142 deletion completed in 40.15795715s

• [SLOW TEST:46.321 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
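The hostAliases spec above verifies that entries from `pod.spec.hostAliases` land in the container's `/etc/hosts`. An illustrative pod of that shape (names are assumptions, not from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-example   # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox-host-aliases
    image: busybox
    # Dump /etc/hosts so the test can assert the aliases were written.
    command: ["cat", "/etc/hosts"]
```

The kubelet, not the container runtime's image, manages `/etc/hosts` for pods, which is why this is a kubelet-scoped (and [LinuxOnly]) conformance test.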
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:44:43.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 02:44:43.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-632e5ca1-7e5a-4584-81d6-bdb54f3458c5" in namespace "projected-6767" to be "success or failure"
Sep 18 02:44:43.242: INFO: Pod "downwardapi-volume-632e5ca1-7e5a-4584-81d6-bdb54f3458c5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.195462ms
Sep 18 02:44:45.255: INFO: Pod "downwardapi-volume-632e5ca1-7e5a-4584-81d6-bdb54f3458c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018258591s
Sep 18 02:44:47.262: INFO: Pod "downwardapi-volume-632e5ca1-7e5a-4584-81d6-bdb54f3458c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02571506s
STEP: Saw pod success
Sep 18 02:44:47.263: INFO: Pod "downwardapi-volume-632e5ca1-7e5a-4584-81d6-bdb54f3458c5" satisfied condition "success or failure"
Sep 18 02:44:47.269: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-632e5ca1-7e5a-4584-81d6-bdb54f3458c5 container client-container: 
STEP: delete the pod
Sep 18 02:44:47.324: INFO: Waiting for pod downwardapi-volume-632e5ca1-7e5a-4584-81d6-bdb54f3458c5 to disappear
Sep 18 02:44:47.340: INFO: Pod downwardapi-volume-632e5ca1-7e5a-4584-81d6-bdb54f3458c5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:44:47.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6767" for this suite.
Sep 18 02:44:53.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:44:53.519: INFO: namespace projected-6767 deletion completed in 6.167700728s

• [SLOW TEST:10.381 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
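The projected downwardAPI spec above covers the same memory-limit check as the plain downwardAPI volume test earlier, but through a `projected` volume. Only the volume stanza differs; an illustrative fragment (paths assumed):

```yaml
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

Projected volumes let downwardAPI items share a single mount with ConfigMap, Secret, and serviceAccountToken sources.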
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:44:53.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0918 02:45:23.704024       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 18 02:45:23.705: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:45:23.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4326" for this suite.
Sep 18 02:45:29.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:45:29.964: INFO: namespace gc-4326 deletion completed in 6.250284059s

• [SLOW TEST:36.442 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
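The garbage-collector spec above deletes a Deployment with `propagationPolicy: Orphan` and waits 30 seconds to confirm the ReplicaSet is left alone. The same behavior can be reproduced from the CLI (names are illustrative; requires a live cluster):

```shell
# Older clients, such as the v1.15 kubectl in this run:
kubectl delete deployment my-deployment --cascade=false
# kubectl >= 1.20 spells the same policy explicitly:
kubectl delete deployment my-deployment --cascade=orphan
# The ReplicaSet survives, with its ownerReference to the Deployment removed:
kubectl get rs
```

The W0918 warning about the unregistered master node only disables metrics grabbing from control-plane components and does not affect the test verdict.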
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:45:29.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Sep 18 02:45:30.051: INFO: Waiting up to 5m0s for pod "var-expansion-dabcce2b-f66d-4080-9c04-82c52d8489d2" in namespace "var-expansion-6563" to be "success or failure"
Sep 18 02:45:30.107: INFO: Pod "var-expansion-dabcce2b-f66d-4080-9c04-82c52d8489d2": Phase="Pending", Reason="", readiness=false. Elapsed: 56.369041ms
Sep 18 02:45:32.115: INFO: Pod "var-expansion-dabcce2b-f66d-4080-9c04-82c52d8489d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06365842s
Sep 18 02:45:34.122: INFO: Pod "var-expansion-dabcce2b-f66d-4080-9c04-82c52d8489d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070635549s
STEP: Saw pod success
Sep 18 02:45:34.122: INFO: Pod "var-expansion-dabcce2b-f66d-4080-9c04-82c52d8489d2" satisfied condition "success or failure"
Sep 18 02:45:34.128: INFO: Trying to get logs from node iruya-worker pod var-expansion-dabcce2b-f66d-4080-9c04-82c52d8489d2 container dapi-container: 
STEP: delete the pod
Sep 18 02:45:34.197: INFO: Waiting for pod var-expansion-dabcce2b-f66d-4080-9c04-82c52d8489d2 to disappear
Sep 18 02:45:34.209: INFO: Pod var-expansion-dabcce2b-f66d-4080-9c04-82c52d8489d2 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:45:34.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6563" for this suite.
Sep 18 02:45:40.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:45:40.357: INFO: namespace var-expansion-6563 deletion completed in 6.138187797s

• [SLOW TEST:10.391 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
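The variable-expansion spec above checks that `$(VAR)` references in a container's `args` are substituted from its `env` list by the kubelet before the process starts. A minimal sketch (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    # $(TEST_VAR) is expanded by the kubelet from the env list below,
    # before the shell ever sees the argument:
    args: ["echo $(TEST_VAR)"]
    env:
    - name: TEST_VAR
      value: "test-value"
```

Note this is Kubernetes-level expansion, distinct from shell expansion: an undefined `$(VAR)` is passed through literally rather than erroring.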
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:45:40.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-709c2033-10a3-41a8-ac9b-168b9a80cb01
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-709c2033-10a3-41a8-ac9b-168b9a80cb01
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:45:48.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3145" for this suite.
Sep 18 02:46:10.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:46:10.789: INFO: namespace configmap-3145 deletion completed in 22.193784441s

• [SLOW TEST:30.430 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
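The ConfigMap-update spec above mounts a ConfigMap as a volume, edits it, and waits for the mounted file to change. An illustrative pair of manifests (names and keys assumed):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd   # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    # Keep printing the mounted key so an update is observable:
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd
```

Updates propagate on the kubelet's sync period rather than instantly, which accounts for the delay between "Updating configmap" and "waiting to observe update in volume" above.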
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:46:10.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Sep 18 02:46:14.930: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Sep 18 02:46:26.082: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:46:26.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9271" for this suite.
Sep 18 02:46:32.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:46:32.356: INFO: namespace pods-9271 deletion completed in 6.260609684s

• [SLOW TEST:21.567 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
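The delete-grace-period spec above submits a pod, deletes it gracefully, and verifies the kubelet observed the termination notice. The equivalent manual flow (pod name illustrative; requires a live cluster):

```shell
# The API server stamps deletionTimestamp and deletionGracePeriodSeconds;
# the kubelet sends SIGTERM, waits up to the grace period, then SIGKILLs.
kubectl delete pod my-pod --grace-period=30
# Watch the pod disappear from the API once termination is confirmed:
kubectl get pod my-pod --watch
```

The "no pod exists with the name we were looking for" line above is the test's success condition: the pod vanished from the apiserver within the deadline.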
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:46:32.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-744c97a8-733d-49dc-9df3-0a9c1d438692
STEP: Creating a pod to test consume configMaps
Sep 18 02:46:32.540: INFO: Waiting up to 5m0s for pod "pod-configmaps-850ee3d0-6bcd-4fdf-bfdd-8027afacee13" in namespace "configmap-8021" to be "success or failure"
Sep 18 02:46:32.553: INFO: Pod "pod-configmaps-850ee3d0-6bcd-4fdf-bfdd-8027afacee13": Phase="Pending", Reason="", readiness=false. Elapsed: 13.509405ms
Sep 18 02:46:34.561: INFO: Pod "pod-configmaps-850ee3d0-6bcd-4fdf-bfdd-8027afacee13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02079188s
Sep 18 02:46:36.567: INFO: Pod "pod-configmaps-850ee3d0-6bcd-4fdf-bfdd-8027afacee13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027669438s
STEP: Saw pod success
Sep 18 02:46:36.568: INFO: Pod "pod-configmaps-850ee3d0-6bcd-4fdf-bfdd-8027afacee13" satisfied condition "success or failure"
Sep 18 02:46:36.572: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-850ee3d0-6bcd-4fdf-bfdd-8027afacee13 container configmap-volume-test: 
STEP: delete the pod
Sep 18 02:46:36.693: INFO: Waiting for pod pod-configmaps-850ee3d0-6bcd-4fdf-bfdd-8027afacee13 to disappear
Sep 18 02:46:36.701: INFO: Pod pod-configmaps-850ee3d0-6bcd-4fdf-bfdd-8027afacee13 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:46:36.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8021" for this suite.
Sep 18 02:46:42.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:46:42.854: INFO: namespace configmap-8021 deletion completed in 6.143785221s

• [SLOW TEST:10.496 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:46:42.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 18 02:46:42.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-3323'
Sep 18 02:46:44.099: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 18 02:46:44.099: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Sep 18 02:46:46.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3323'
Sep 18 02:46:47.310: INFO: stderr: ""
Sep 18 02:46:47.310: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:46:47.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3323" for this suite.
Sep 18 02:47:09.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:47:09.468: INFO: namespace kubectl-3323 deletion completed in 22.147985626s

• [SLOW TEST:26.609 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
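The kubectl spec above relies on the `--generator=deployment/apps.v1` form, which the stderr line already flags as deprecated (it was removed in later kubectl releases). For comparison, the command the test ran and its modern equivalent:

```shell
# What the test ran (deprecated generator):
kubectl run e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --generator=deployment/apps.v1
# The replacement the deprecation notice points to:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine
```

In current Kubernetes, `kubectl run` only creates bare pods; Deployments are created via `kubectl create deployment` or a manifest.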
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:47:09.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 02:47:09.549: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb4f2da2-c975-480e-a180-44f2307ee6d6" in namespace "downward-api-9760" to be "success or failure"
Sep 18 02:47:09.572: INFO: Pod "downwardapi-volume-bb4f2da2-c975-480e-a180-44f2307ee6d6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.9036ms
Sep 18 02:47:11.579: INFO: Pod "downwardapi-volume-bb4f2da2-c975-480e-a180-44f2307ee6d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030335555s
Sep 18 02:47:13.596: INFO: Pod "downwardapi-volume-bb4f2da2-c975-480e-a180-44f2307ee6d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046559367s
STEP: Saw pod success
Sep 18 02:47:13.596: INFO: Pod "downwardapi-volume-bb4f2da2-c975-480e-a180-44f2307ee6d6" satisfied condition "success or failure"
Sep 18 02:47:13.603: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-bb4f2da2-c975-480e-a180-44f2307ee6d6 container client-container: 
STEP: delete the pod
Sep 18 02:47:13.638: INFO: Waiting for pod downwardapi-volume-bb4f2da2-c975-480e-a180-44f2307ee6d6 to disappear
Sep 18 02:47:13.686: INFO: Pod downwardapi-volume-bb4f2da2-c975-480e-a180-44f2307ee6d6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:47:13.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9760" for this suite.
Sep 18 02:47:19.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:47:19.856: INFO: namespace downward-api-9760 deletion completed in 6.159374403s

• [SLOW TEST:10.382 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:47:19.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Sep 18 02:47:19.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Sep 18 02:47:21.037: INFO: stderr: ""
Sep 18 02:47:21.038: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:43279\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:43279/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:47:21.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-325" for this suite.
Sep 18 02:47:27.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:47:27.224: INFO: namespace kubectl-325 deletion completed in 6.174567964s

• [SLOW TEST:7.366 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
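Editor's note: the captured stdout above contains ANSI color sequences (`\x1b[0;32m…`). A minimal sketch of how such output can be stripped before asserting on it, using the exact string from the log; the `strip_ansi` helper is illustrative and not part of the e2e suite:

```python
import re

# SGR color sequences as emitted by `kubectl cluster-info` (e.g. "\x1b[0;32m").
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s: str) -> str:
    """Remove ANSI color escape sequences, leaving plain text."""
    return ANSI_RE.sub("", s)

# Sample taken verbatim from the captured stdout above.
stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.30.12.66:43279\x1b[0m")
plain = strip_ansi(stdout)
assert "Kubernetes master" in plain
print(plain)
```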
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:47:27.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-c64cfbbc-a5b0-4e82-b68d-3a98fa6acd59 in namespace container-probe-1380
Sep 18 02:47:31.326: INFO: Started pod liveness-c64cfbbc-a5b0-4e82-b68d-3a98fa6acd59 in namespace container-probe-1380
STEP: checking the pod's current state and verifying that restartCount is present
Sep 18 02:47:31.331: INFO: Initial restart count of pod liveness-c64cfbbc-a5b0-4e82-b68d-3a98fa6acd59 is 0
Sep 18 02:47:55.421: INFO: Restart count of pod container-probe-1380/liveness-c64cfbbc-a5b0-4e82-b68d-3a98fa6acd59 is now 1 (24.089491335s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:47:55.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1380" for this suite.
Sep 18 02:48:01.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:48:01.616: INFO: namespace container-probe-1380 deletion completed in 6.166778797s

• [SLOW TEST:34.388 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
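Editor's note: the probe spec above restarts a container when `/healthz` fails. A minimal sketch of a pod of that shape — not the suite's exact manifest; the image, port, and timings here are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http          # the suite generates a UUID-suffixed name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness  # stand-in; the suite ships its own test image
    args: ["/server"]           # serves /healthz, then starts failing it
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
```

Once `/healthz` begins returning errors the kubelet restarts the container, which corresponds to the restartCount 0 → 1 transition logged above.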
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:48:01.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Sep 18 02:48:02.326: INFO: Pod name wrapped-volume-race-c2e89e4c-51c0-433a-addc-89166ddf3578: Found 0 pods out of 5
Sep 18 02:48:07.360: INFO: Pod name wrapped-volume-race-c2e89e4c-51c0-433a-addc-89166ddf3578: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c2e89e4c-51c0-433a-addc-89166ddf3578 in namespace emptydir-wrapper-7858, will wait for the garbage collector to delete the pods
Sep 18 02:48:19.528: INFO: Deleting ReplicationController wrapped-volume-race-c2e89e4c-51c0-433a-addc-89166ddf3578 took: 9.999622ms
Sep 18 02:48:19.829: INFO: Terminating ReplicationController wrapped-volume-race-c2e89e4c-51c0-433a-addc-89166ddf3578 pods took: 301.023724ms
STEP: Creating RC which spawns configmap-volume pods
Sep 18 02:49:05.708: INFO: Pod name wrapped-volume-race-4125cccc-15f9-43e6-ac83-ba906b33d35b: Found 0 pods out of 5
Sep 18 02:49:10.732: INFO: Pod name wrapped-volume-race-4125cccc-15f9-43e6-ac83-ba906b33d35b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4125cccc-15f9-43e6-ac83-ba906b33d35b in namespace emptydir-wrapper-7858, will wait for the garbage collector to delete the pods
Sep 18 02:49:24.884: INFO: Deleting ReplicationController wrapped-volume-race-4125cccc-15f9-43e6-ac83-ba906b33d35b took: 9.431563ms
Sep 18 02:49:25.485: INFO: Terminating ReplicationController wrapped-volume-race-4125cccc-15f9-43e6-ac83-ba906b33d35b pods took: 600.705117ms
STEP: Creating RC which spawns configmap-volume pods
Sep 18 02:50:05.027: INFO: Pod name wrapped-volume-race-ae0a29c4-618f-4815-92c2-afc8b2c64759: Found 0 pods out of 5
Sep 18 02:50:10.050: INFO: Pod name wrapped-volume-race-ae0a29c4-618f-4815-92c2-afc8b2c64759: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ae0a29c4-618f-4815-92c2-afc8b2c64759 in namespace emptydir-wrapper-7858, will wait for the garbage collector to delete the pods
Sep 18 02:50:24.163: INFO: Deleting ReplicationController wrapped-volume-race-ae0a29c4-618f-4815-92c2-afc8b2c64759 took: 8.793365ms
Sep 18 02:50:24.464: INFO: Terminating ReplicationController wrapped-volume-race-ae0a29c4-618f-4815-92c2-afc8b2c64759 pods took: 300.832181ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:51:05.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7858" for this suite.
Sep 18 02:51:13.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:51:13.439: INFO: namespace emptydir-wrapper-7858 deletion completed in 8.15695337s

• [SLOW TEST:191.821 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
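Editor's note: the spec above repeatedly spawns an RC whose pods each mount many configMap volumes, to exercise the historical wrapper-volume race. A heavily abbreviated sketch of that shape (names and image are illustrative; the test uses 50 configMaps and 5 replicas):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race     # the suite appends a UUID
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race
  template:
    metadata:
      labels:
        name: wrapped-volume-race
    spec:
      containers:
      - name: test-container
        image: busybox           # stand-in for the suite's test image
        command: ["sleep", "10000"]
        volumeMounts:
        - name: racey-configmap-0
          mountPath: /etc/config-0
        # ...one mount per configMap; 50 in the test
      volumes:
      - name: racey-configmap-0
        configMap:
          name: racey-configmap-0
      # ...49 more volumes of the same shape
```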
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:51:13.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3319
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 18 02:51:13.486: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep 18 02:51:39.720: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.236 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3319 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 02:51:39.720: INFO: >>> kubeConfig: /root/.kube/config
I0918 02:51:39.821664       7 log.go:172] (0x79db0a0) (0x6dc03f0) Create stream
I0918 02:51:39.821904       7 log.go:172] (0x79db0a0) (0x6dc03f0) Stream added, broadcasting: 1
I0918 02:51:39.826623       7 log.go:172] (0x79db0a0) Reply frame received for 1
I0918 02:51:39.826857       7 log.go:172] (0x79db0a0) (0x7d9b340) Create stream
I0918 02:51:39.826957       7 log.go:172] (0x79db0a0) (0x7d9b340) Stream added, broadcasting: 3
I0918 02:51:39.828596       7 log.go:172] (0x79db0a0) Reply frame received for 3
I0918 02:51:39.828812       7 log.go:172] (0x79db0a0) (0x6dc0af0) Create stream
I0918 02:51:39.828919       7 log.go:172] (0x79db0a0) (0x6dc0af0) Stream added, broadcasting: 5
I0918 02:51:39.830623       7 log.go:172] (0x79db0a0) Reply frame received for 5
I0918 02:51:40.890270       7 log.go:172] (0x79db0a0) Data frame received for 3
I0918 02:51:40.890566       7 log.go:172] (0x7d9b340) (3) Data frame handling
I0918 02:51:40.890763       7 log.go:172] (0x79db0a0) Data frame received for 5
I0918 02:51:40.891031       7 log.go:172] (0x6dc0af0) (5) Data frame handling
I0918 02:51:40.891555       7 log.go:172] (0x7d9b340) (3) Data frame sent
I0918 02:51:40.892058       7 log.go:172] (0x79db0a0) Data frame received for 3
I0918 02:51:40.892399       7 log.go:172] (0x7d9b340) (3) Data frame handling
I0918 02:51:40.892818       7 log.go:172] (0x79db0a0) Data frame received for 1
I0918 02:51:40.893049       7 log.go:172] (0x6dc03f0) (1) Data frame handling
I0918 02:51:40.893277       7 log.go:172] (0x6dc03f0) (1) Data frame sent
I0918 02:51:40.893448       7 log.go:172] (0x79db0a0) (0x6dc03f0) Stream removed, broadcasting: 1
I0918 02:51:40.893664       7 log.go:172] (0x79db0a0) Go away received
I0918 02:51:40.894126       7 log.go:172] (0x79db0a0) (0x6dc03f0) Stream removed, broadcasting: 1
I0918 02:51:40.894284       7 log.go:172] (0x79db0a0) (0x7d9b340) Stream removed, broadcasting: 3
I0918 02:51:40.894425       7 log.go:172] (0x79db0a0) (0x6dc0af0) Stream removed, broadcasting: 5
Sep 18 02:51:40.895: INFO: Found all expected endpoints: [netserver-0]
Sep 18 02:51:40.901: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.26 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3319 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 02:51:40.902: INFO: >>> kubeConfig: /root/.kube/config
I0918 02:51:41.006268       7 log.go:172] (0x7c069a0) (0x7c06a80) Create stream
I0918 02:51:41.006408       7 log.go:172] (0x7c069a0) (0x7c06a80) Stream added, broadcasting: 1
I0918 02:51:41.011225       7 log.go:172] (0x7c069a0) Reply frame received for 1
I0918 02:51:41.011594       7 log.go:172] (0x7c069a0) (0x7c06b60) Create stream
I0918 02:51:41.011752       7 log.go:172] (0x7c069a0) (0x7c06b60) Stream added, broadcasting: 3
I0918 02:51:41.013817       7 log.go:172] (0x7c069a0) Reply frame received for 3
I0918 02:51:41.013959       7 log.go:172] (0x7c069a0) (0x7c06c40) Create stream
I0918 02:51:41.014035       7 log.go:172] (0x7c069a0) (0x7c06c40) Stream added, broadcasting: 5
I0918 02:51:41.015636       7 log.go:172] (0x7c069a0) Reply frame received for 5
I0918 02:51:42.081515       7 log.go:172] (0x7c069a0) Data frame received for 3
I0918 02:51:42.081773       7 log.go:172] (0x7c06b60) (3) Data frame handling
I0918 02:51:42.081982       7 log.go:172] (0x7c069a0) Data frame received for 5
I0918 02:51:42.082283       7 log.go:172] (0x7c06c40) (5) Data frame handling
I0918 02:51:42.082480       7 log.go:172] (0x7c06b60) (3) Data frame sent
I0918 02:51:42.082661       7 log.go:172] (0x7c069a0) Data frame received for 3
I0918 02:51:42.082785       7 log.go:172] (0x7c06b60) (3) Data frame handling
I0918 02:51:42.083793       7 log.go:172] (0x7c069a0) Data frame received for 1
I0918 02:51:42.083941       7 log.go:172] (0x7c06a80) (1) Data frame handling
I0918 02:51:42.084075       7 log.go:172] (0x7c06a80) (1) Data frame sent
I0918 02:51:42.084342       7 log.go:172] (0x7c069a0) (0x7c06a80) Stream removed, broadcasting: 1
I0918 02:51:42.084524       7 log.go:172] (0x7c069a0) Go away received
I0918 02:51:42.084906       7 log.go:172] (0x7c069a0) (0x7c06a80) Stream removed, broadcasting: 1
I0918 02:51:42.085038       7 log.go:172] (0x7c069a0) (0x7c06b60) Stream removed, broadcasting: 3
I0918 02:51:42.085141       7 log.go:172] (0x7c069a0) (0x7c06c40) Stream removed, broadcasting: 5
Sep 18 02:51:42.085: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:51:42.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3319" for this suite.
Sep 18 02:52:04.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:52:04.255: INFO: namespace pod-network-test-3319 deletion completed in 22.160321638s

• [SLOW TEST:50.814 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
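Editor's note: the check above pipes `hostName` to `nc -w 1 -u <pod-ip> 8081` and expects the netserver pod to answer with its hostname. A local sketch of the same request/response shape over a loopback UDP socket; the responder here stands in for the netserver container, and the hostname string is illustrative:

```python
import socket

# "Netserver" side: a UDP socket on a kernel-assigned loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(1.0)
addr = server.getsockname()

# Probe side, mirroring `echo hostName | nc -w 1 -u <ip> <port>`.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)
client.sendto(b"hostName", addr)

data, peer = server.recvfrom(1024)   # netserver receives the probe...
server.sendto(b"netserver-0", peer)  # ...and replies with its hostname

reply, _ = client.recvfrom(1024)     # the e2e test greps this reply
print(reply.decode())
server.close()
client.close()
```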
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:52:04.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-132ae432-1607-49fd-909a-c69db224b5be
STEP: Creating a pod to test consume secrets
Sep 18 02:52:04.377: INFO: Waiting up to 5m0s for pod "pod-secrets-defee286-544c-4435-a283-8d25a2855fb8" in namespace "secrets-5144" to be "success or failure"
Sep 18 02:52:04.396: INFO: Pod "pod-secrets-defee286-544c-4435-a283-8d25a2855fb8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.973317ms
Sep 18 02:52:06.405: INFO: Pod "pod-secrets-defee286-544c-4435-a283-8d25a2855fb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027409889s
Sep 18 02:52:08.411: INFO: Pod "pod-secrets-defee286-544c-4435-a283-8d25a2855fb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033630412s
STEP: Saw pod success
Sep 18 02:52:08.411: INFO: Pod "pod-secrets-defee286-544c-4435-a283-8d25a2855fb8" satisfied condition "success or failure"
Sep 18 02:52:08.415: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-defee286-544c-4435-a283-8d25a2855fb8 container secret-volume-test: 
STEP: delete the pod
Sep 18 02:52:08.482: INFO: Waiting for pod pod-secrets-defee286-544c-4435-a283-8d25a2855fb8 to disappear
Sep 18 02:52:08.501: INFO: Pod pod-secrets-defee286-544c-4435-a283-8d25a2855fb8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:52:08.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5144" for this suite.
Sep 18 02:52:14.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:52:14.665: INFO: namespace secrets-5144 deletion completed in 6.155317694s

• [SLOW TEST:10.408 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
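Editor's note: "consumable in multiple volumes" means one Secret mounted at two paths in the same pod. A minimal sketch (names, image, and the cat'd keys are illustrative, not the suite's generated values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets             # the suite uses a UUID-suffixed name
spec:
  containers:
  - name: secret-volume-test
    image: busybox               # stand-in for the suite's test image
    command: ["sh", "-c",
      "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  restartPolicy: Never
  volumes:                       # the same Secret backs both volumes
  - name: secret-volume-1
    secret:
      secretName: secret-test
  - name: secret-volume-2
    secret:
      secretName: secret-test
```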
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:52:14.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-8682ce33-f987-4c4f-99c5-8683fef947e8
STEP: Creating a pod to test consume configMaps
Sep 18 02:52:14.788: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6e00738-afd8-449d-bf8e-11018d6636f7" in namespace "configmap-4508" to be "success or failure"
Sep 18 02:52:14.800: INFO: Pod "pod-configmaps-f6e00738-afd8-449d-bf8e-11018d6636f7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.161295ms
Sep 18 02:52:16.807: INFO: Pod "pod-configmaps-f6e00738-afd8-449d-bf8e-11018d6636f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018769831s
Sep 18 02:52:18.813: INFO: Pod "pod-configmaps-f6e00738-afd8-449d-bf8e-11018d6636f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025490256s
STEP: Saw pod success
Sep 18 02:52:18.814: INFO: Pod "pod-configmaps-f6e00738-afd8-449d-bf8e-11018d6636f7" satisfied condition "success or failure"
Sep 18 02:52:18.819: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-f6e00738-afd8-449d-bf8e-11018d6636f7 container configmap-volume-test: 
STEP: delete the pod
Sep 18 02:52:18.856: INFO: Waiting for pod pod-configmaps-f6e00738-afd8-449d-bf8e-11018d6636f7 to disappear
Sep 18 02:52:18.872: INFO: Pod pod-configmaps-f6e00738-afd8-449d-bf8e-11018d6636f7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:52:18.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4508" for this suite.
Sep 18 02:52:24.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:52:25.106: INFO: namespace configmap-4508 deletion completed in 6.226415793s

• [SLOW TEST:10.438 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
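Editor's note: "with mappings as non-root" means the configMap key is remapped to a nested path via `items`, read by a container running as a non-root UID. A minimal sketch (key names, UID, and image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps           # the suite uses a UUID-suffixed name
spec:
  securityContext:
    runAsUser: 1000              # the "as non-root" part of the spec
  containers:
  - name: configmap-volume-test
    image: busybox                # stand-in for the suite's test image
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                     # "with mappings": remap key to a nested path
      - key: data-2
        path: path/to/data-2
```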
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:52:25.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-650b9cb1-7dc2-4ccd-9f27-4b714264081f
STEP: Creating a pod to test consume secrets
Sep 18 02:52:25.212: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-045ec82e-cd4c-4811-8449-cbce15105b91" in namespace "projected-7232" to be "success or failure"
Sep 18 02:52:25.226: INFO: Pod "pod-projected-secrets-045ec82e-cd4c-4811-8449-cbce15105b91": Phase="Pending", Reason="", readiness=false. Elapsed: 13.392516ms
Sep 18 02:52:27.233: INFO: Pod "pod-projected-secrets-045ec82e-cd4c-4811-8449-cbce15105b91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020133142s
Sep 18 02:52:29.240: INFO: Pod "pod-projected-secrets-045ec82e-cd4c-4811-8449-cbce15105b91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027535231s
STEP: Saw pod success
Sep 18 02:52:29.240: INFO: Pod "pod-projected-secrets-045ec82e-cd4c-4811-8449-cbce15105b91" satisfied condition "success or failure"
Sep 18 02:52:29.246: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-045ec82e-cd4c-4811-8449-cbce15105b91 container projected-secret-volume-test: 
STEP: delete the pod
Sep 18 02:52:29.269: INFO: Waiting for pod pod-projected-secrets-045ec82e-cd4c-4811-8449-cbce15105b91 to disappear
Sep 18 02:52:29.287: INFO: Pod pod-projected-secrets-045ec82e-cd4c-4811-8449-cbce15105b91 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:52:29.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7232" for this suite.
Sep 18 02:52:35.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:52:35.477: INFO: namespace projected-7232 deletion completed in 6.177953406s

• [SLOW TEST:10.367 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
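Editor's note: a projected volume merges one or more sources (here a single Secret) into one mount. A minimal sketch of the shape this spec exercises (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets    # the suite uses a UUID-suffixed name
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox                # stand-in for the suite's test image
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test
```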
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:52:35.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Sep 18 02:52:35.547: INFO: Waiting up to 5m0s for pod "var-expansion-e838601f-3204-4a7e-ae2f-0603b8208f2b" in namespace "var-expansion-1136" to be "success or failure"
Sep 18 02:52:35.555: INFO: Pod "var-expansion-e838601f-3204-4a7e-ae2f-0603b8208f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.424634ms
Sep 18 02:52:37.563: INFO: Pod "var-expansion-e838601f-3204-4a7e-ae2f-0603b8208f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016534106s
Sep 18 02:52:39.570: INFO: Pod "var-expansion-e838601f-3204-4a7e-ae2f-0603b8208f2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023529037s
STEP: Saw pod success
Sep 18 02:52:39.571: INFO: Pod "var-expansion-e838601f-3204-4a7e-ae2f-0603b8208f2b" satisfied condition "success or failure"
Sep 18 02:52:39.575: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-e838601f-3204-4a7e-ae2f-0603b8208f2b container dapi-container: 
STEP: delete the pod
Sep 18 02:52:39.650: INFO: Waiting for pod var-expansion-e838601f-3204-4a7e-ae2f-0603b8208f2b to disappear
Sep 18 02:52:39.655: INFO: Pod var-expansion-e838601f-3204-4a7e-ae2f-0603b8208f2b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:52:39.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1136" for this suite.
Sep 18 02:52:45.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:52:45.798: INFO: namespace var-expansion-1136 deletion completed in 6.135402114s

• [SLOW TEST:10.320 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
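Editor's note: the substitution tested above is the kubelet's `$(VAR)` expansion in a container's command/args, which happens before the process starts and is independent of any shell. A minimal sketch (env name, value, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion            # the suite uses a UUID-suffixed name
spec:
  containers:
  - name: dapi-container
    image: busybox                # stand-in for the suite's test image
    env:
    - name: MESSAGE
      value: "test-value"
    # $(MESSAGE) is expanded by the kubelet, not by sh
    command: ["sh", "-c", "echo $(MESSAGE)"]
  restartPolicy: Never
```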
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:52:45.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Sep 18 02:52:52.616: INFO: 9 pods remaining
Sep 18 02:52:52.616: INFO: 0 pods has nil DeletionTimestamp
Sep 18 02:52:52.616: INFO: 
Sep 18 02:52:53.560: INFO: 0 pods remaining
Sep 18 02:52:53.560: INFO: 0 pods has nil DeletionTimestamp
Sep 18 02:52:53.560: INFO: 
Sep 18 02:52:55.192: INFO: 0 pods remaining
Sep 18 02:52:55.192: INFO: 0 pods has nil DeletionTimestamp
Sep 18 02:52:55.193: INFO: 
STEP: Gathering metrics
W0918 02:52:56.213463       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 18 02:52:56.213: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:52:56.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2258" for this suite.
Sep 18 02:53:02.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:53:02.429: INFO: namespace gc-2258 deletion completed in 6.206679629s

• [SLOW TEST:16.629 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
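Editor's note: the "deleteOptions says so" behavior above is foreground cascading deletion — the RC is kept (with a deletion timestamp and the `foregroundDeletion` finalizer) until the garbage collector has removed all its pods, which matches the "9 pods remaining" → "0 pods remaining" countdown in the log. A sketch of the DeleteOptions body that requests it:

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Foreground"
}
```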
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:53:02.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Sep 18 02:53:09.584: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:53:09.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1094" for this suite.
Sep 18 02:53:31.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:53:31.932: INFO: namespace replicaset-1094 deletion completed in 22.298495659s

• [SLOW TEST:29.501 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
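The adopt/release flow exercised above hinges on label matching between a ReplicaSet selector and a pre-existing pod. A minimal sketch under that assumption (the image and exact manifests are illustrative, not taken from the test):

```yaml
# An orphan pod carrying the label the ReplicaSet will select on.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
---
# A ReplicaSet whose selector matches that label: the controller
# adopts the orphan instead of creating a replacement. Changing the
# pod's label later makes the ReplicaSet release it again.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
```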
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:53:31.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 02:53:32.061: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Sep 18 02:53:32.083: INFO: Number of nodes with available pods: 0
Sep 18 02:53:32.083: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Sep 18 02:53:32.126: INFO: Number of nodes with available pods: 0
Sep 18 02:53:32.126: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 02:53:33.136: INFO: Number of nodes with available pods: 0
Sep 18 02:53:33.136: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 02:53:34.134: INFO: Number of nodes with available pods: 0
Sep 18 02:53:34.134: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 02:53:35.135: INFO: Number of nodes with available pods: 1
Sep 18 02:53:35.135: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Sep 18 02:53:35.183: INFO: Number of nodes with available pods: 1
Sep 18 02:53:35.183: INFO: Number of running nodes: 0, number of available pods: 1
Sep 18 02:53:36.190: INFO: Number of nodes with available pods: 0
Sep 18 02:53:36.190: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Sep 18 02:53:36.223: INFO: Number of nodes with available pods: 0
Sep 18 02:53:36.223: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 02:53:37.231: INFO: Number of nodes with available pods: 0
Sep 18 02:53:37.231: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 02:53:38.231: INFO: Number of nodes with available pods: 0
Sep 18 02:53:38.231: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 02:53:39.231: INFO: Number of nodes with available pods: 0
Sep 18 02:53:39.232: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 02:53:40.235: INFO: Number of nodes with available pods: 0
Sep 18 02:53:40.235: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 02:53:41.231: INFO: Number of nodes with available pods: 0
Sep 18 02:53:41.231: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 02:53:42.231: INFO: Number of nodes with available pods: 1
Sep 18 02:53:42.231: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-11, will wait for the garbage collector to delete the pods
Sep 18 02:53:42.307: INFO: Deleting DaemonSet.extensions daemon-set took: 8.15079ms
Sep 18 02:53:42.609: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.064393ms
Sep 18 02:53:47.015: INFO: Number of nodes with available pods: 0
Sep 18 02:53:47.015: INFO: Number of running nodes: 0, number of available pods: 0
Sep 18 02:53:47.024: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-11/daemonsets","resourceVersion":"789500"},"items":null}

Sep 18 02:53:47.028: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-11/pods","resourceVersion":"789500"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:53:47.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-11" for this suite.
Sep 18 02:53:53.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:53:53.299: INFO: namespace daemonsets-11 deletion completed in 6.191294011s

• [SLOW TEST:21.363 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
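The "complex daemon" sequence above (no pods, label a node blue, pods appear; relabel green, pods drain; update selector and strategy, pods return) can be sketched as a node-selector DaemonSet. Label keys, values, and image here are illustrative assumptions:

```yaml
# Sketch of a DaemonSet pinned to nodes by label: pods launch only on
# nodes labeled color=blue and are evicted when the label changes.
# The selector and updateStrategy can both be patched in place, as
# the test does when it switches to green and RollingUpdate.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
```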
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:53:53.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Sep 18 02:53:53.405: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:54:04.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5385" for this suite.
Sep 18 02:54:10.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:54:10.645: INFO: namespace pods-5385 deletion completed in 6.135806174s

• [SLOW TEST:17.343 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
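The "deleting the pod gracefully" step above is governed by the pod's grace period: the kubelet observes the termination notice, waits, then the deletion event fires on the watch. A minimal sketch (pod name, image, and the 30s value are assumptions; the log does not show the manifest):

```yaml
# Sketch: terminationGracePeriodSeconds bounds the delay between the
# termination notice the kubelet observes and the final deletion
# event seen by the watch.
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
```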
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:54:10.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Sep 18 02:54:10.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6820'
Sep 18 02:54:14.953: INFO: stderr: ""
Sep 18 02:54:14.953: INFO: stdout: "pod/pause created\n"
Sep 18 02:54:14.954: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Sep 18 02:54:14.954: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6820" to be "running and ready"
Sep 18 02:54:15.014: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 59.213371ms
Sep 18 02:54:17.086: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131473713s
Sep 18 02:54:19.093: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138157147s
Sep 18 02:54:21.099: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.144697455s
Sep 18 02:54:21.099: INFO: Pod "pause" satisfied condition "running and ready"
Sep 18 02:54:21.100: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Sep 18 02:54:21.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6820'
Sep 18 02:54:22.405: INFO: stderr: ""
Sep 18 02:54:22.405: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Sep 18 02:54:22.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6820'
Sep 18 02:54:23.540: INFO: stderr: ""
Sep 18 02:54:23.540: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Sep 18 02:54:23.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6820'
Sep 18 02:54:24.669: INFO: stderr: ""
Sep 18 02:54:24.669: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Sep 18 02:54:24.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6820'
Sep 18 02:54:25.890: INFO: stderr: ""
Sep 18 02:54:25.890: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Sep 18 02:54:25.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6820'
Sep 18 02:54:27.018: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 18 02:54:27.019: INFO: stdout: "pod \"pause\" force deleted\n"
Sep 18 02:54:27.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6820'
Sep 18 02:54:28.275: INFO: stderr: "No resources found.\n"
Sep 18 02:54:28.275: INFO: stdout: ""
Sep 18 02:54:28.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6820 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 18 02:54:29.442: INFO: stderr: ""
Sep 18 02:54:29.443: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:54:29.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6820" for this suite.
Sep 18 02:54:35.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:54:36.074: INFO: namespace kubectl-6820 deletion completed in 6.62559015s

• [SLOW TEST:25.429 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
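The manifest piped to `kubectl create -f -` at the start of the label test is not shown in the log. A minimal pause-pod sketch that would satisfy it (image and fields are assumptions; the `name=pause` label is inferred from the cleanup query `kubectl get rc,svc -l name=pause`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause   # matched later by the -l name=pause cleanup queries
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```

Labels are then added with `kubectl label pods pause testing-label=testing-label-value` and removed with the trailing-dash form `kubectl label pods pause testing-label-`, exactly as the log shows.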
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:54:36.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-87808340-679a-4c78-9c55-02485e785a55
STEP: Creating a pod to test consume configMaps
Sep 18 02:54:36.212: INFO: Waiting up to 5m0s for pod "pod-configmaps-414c34a8-d200-45b2-b90a-ad05c5b6233e" in namespace "configmap-3404" to be "success or failure"
Sep 18 02:54:36.223: INFO: Pod "pod-configmaps-414c34a8-d200-45b2-b90a-ad05c5b6233e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.866841ms
Sep 18 02:54:38.231: INFO: Pod "pod-configmaps-414c34a8-d200-45b2-b90a-ad05c5b6233e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018261203s
Sep 18 02:54:40.237: INFO: Pod "pod-configmaps-414c34a8-d200-45b2-b90a-ad05c5b6233e": Phase="Running", Reason="", readiness=true. Elapsed: 4.024867664s
Sep 18 02:54:42.245: INFO: Pod "pod-configmaps-414c34a8-d200-45b2-b90a-ad05c5b6233e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032535148s
STEP: Saw pod success
Sep 18 02:54:42.245: INFO: Pod "pod-configmaps-414c34a8-d200-45b2-b90a-ad05c5b6233e" satisfied condition "success or failure"
Sep 18 02:54:42.250: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-414c34a8-d200-45b2-b90a-ad05c5b6233e container configmap-volume-test: 
STEP: delete the pod
Sep 18 02:54:42.271: INFO: Waiting for pod pod-configmaps-414c34a8-d200-45b2-b90a-ad05c5b6233e to disappear
Sep 18 02:54:42.276: INFO: Pod pod-configmaps-414c34a8-d200-45b2-b90a-ad05c5b6233e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:54:42.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3404" for this suite.
Sep 18 02:54:48.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:54:48.453: INFO: namespace configmap-3404 deletion completed in 6.166559086s

• [SLOW TEST:12.378 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
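The non-root ConfigMap consumption above combines a configMap volume with a pod-level securityContext. A hedged sketch (the UID, image, and mount path are illustrative; only the ConfigMap name and container name come from the log):

```yaml
# Sketch: the volume projects the ConfigMap's keys as files, and
# runAsUser makes the read happen under a non-root UID.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-87808340-679a-4c78-9c55-02485e785a55
```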
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:54:48.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 02:54:48.565: INFO: Waiting up to 5m0s for pod "downwardapi-volume-380b30a8-c832-49d7-a974-33f62d52b535" in namespace "downward-api-9668" to be "success or failure"
Sep 18 02:54:48.610: INFO: Pod "downwardapi-volume-380b30a8-c832-49d7-a974-33f62d52b535": Phase="Pending", Reason="", readiness=false. Elapsed: 44.412382ms
Sep 18 02:54:50.614: INFO: Pod "downwardapi-volume-380b30a8-c832-49d7-a974-33f62d52b535": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049139799s
Sep 18 02:54:52.620: INFO: Pod "downwardapi-volume-380b30a8-c832-49d7-a974-33f62d52b535": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054691871s
STEP: Saw pod success
Sep 18 02:54:52.620: INFO: Pod "downwardapi-volume-380b30a8-c832-49d7-a974-33f62d52b535" satisfied condition "success or failure"
Sep 18 02:54:52.624: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-380b30a8-c832-49d7-a974-33f62d52b535 container client-container: 
STEP: delete the pod
Sep 18 02:54:52.722: INFO: Waiting for pod downwardapi-volume-380b30a8-c832-49d7-a974-33f62d52b535 to disappear
Sep 18 02:54:52.726: INFO: Pod downwardapi-volume-380b30a8-c832-49d7-a974-33f62d52b535 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:54:52.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9668" for this suite.
Sep 18 02:54:58.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:54:58.882: INFO: namespace downward-api-9668 deletion completed in 6.147554818s

• [SLOW TEST:10.426 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
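The downward API behavior verified above relies on `resourceFieldRef` defaulting: when the container declares no cpu limit, `limits.cpu` resolves to the node's allocatable cpu. A sketch using the standard API fields (image and paths are assumptions; `client-container` matches the container name in the log):

```yaml
# Sketch: with no cpu limit on the container, the downward API
# volume's limits.cpu falls back to node-allocatable cpu.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
```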
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:54:58.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Sep 18 02:55:07.301: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 18 02:55:07.315: INFO: Pod pod-with-prestop-http-hook still exists
Sep 18 02:55:09.316: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 18 02:55:09.322: INFO: Pod pod-with-prestop-http-hook still exists
Sep 18 02:55:11.316: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 18 02:55:11.323: INFO: Pod pod-with-prestop-http-hook still exists
Sep 18 02:55:13.316: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 18 02:55:13.323: INFO: Pod pod-with-prestop-http-hook still exists
Sep 18 02:55:15.316: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 18 02:55:15.323: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:55:15.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5294" for this suite.
Sep 18 02:55:37.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:55:37.501: INFO: namespace container-lifecycle-hook-5294 deletion completed in 22.159455833s

• [SLOW TEST:38.618 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
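The preStop flow above (delete the pod, kubelet fires the hook, handler pod records the hit, test checks it) corresponds to a `lifecycle.preStop.httpGet` handler. A sketch with illustrative host, path, and port (the handler pod's address is not shown in this log):

```yaml
# Sketch: on deletion the kubelet issues the preStop HTTP GET to the
# handler pod before stopping the container, which is why the pod
# lingers for a few polls above before disappearing.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      preStop:
        httpGet:
          host: 10.244.1.5    # handler pod IP (assumed)
          path: /echo?msg=prestop
          port: 8080
```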
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:55:37.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 02:55:37.555: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:55:38.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9592" for this suite.
Sep 18 02:55:44.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:55:44.931: INFO: namespace custom-resource-definition-9592 deletion completed in 6.176530716s

• [SLOW TEST:7.425 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
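The CRD spec above simply creates and deletes a definition object. A minimal sketch of such a CRD (group and names are illustrative; `apiextensions.k8s.io/v1beta1` matches the v1.15-era server under test, where the metadata name must be `<plural>.<group>`):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```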
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:55:44.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-s555
STEP: Creating a pod to test atomic-volume-subpath
Sep 18 02:55:45.060: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-s555" in namespace "subpath-8250" to be "success or failure"
Sep 18 02:55:45.088: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Pending", Reason="", readiness=false. Elapsed: 27.98483ms
Sep 18 02:55:47.106: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045551248s
Sep 18 02:55:49.128: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Running", Reason="", readiness=true. Elapsed: 4.068153333s
Sep 18 02:55:51.147: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Running", Reason="", readiness=true. Elapsed: 6.086417652s
Sep 18 02:55:53.163: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Running", Reason="", readiness=true. Elapsed: 8.103336271s
Sep 18 02:55:55.170: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Running", Reason="", readiness=true. Elapsed: 10.109902516s
Sep 18 02:55:57.175: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Running", Reason="", readiness=true. Elapsed: 12.114932826s
Sep 18 02:55:59.182: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Running", Reason="", readiness=true. Elapsed: 14.122330225s
Sep 18 02:56:01.189: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Running", Reason="", readiness=true. Elapsed: 16.128758773s
Sep 18 02:56:03.196: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Running", Reason="", readiness=true. Elapsed: 18.135918341s
Sep 18 02:56:05.202: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Running", Reason="", readiness=true. Elapsed: 20.142258711s
Sep 18 02:56:07.207: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Running", Reason="", readiness=true. Elapsed: 22.147225708s
Sep 18 02:56:09.214: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Running", Reason="", readiness=true. Elapsed: 24.154183824s
Sep 18 02:56:11.221: INFO: Pod "pod-subpath-test-downwardapi-s555": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.16079503s
STEP: Saw pod success
Sep 18 02:56:11.221: INFO: Pod "pod-subpath-test-downwardapi-s555" satisfied condition "success or failure"
Sep 18 02:56:11.226: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-s555 container test-container-subpath-downwardapi-s555: 
STEP: delete the pod
Sep 18 02:56:11.298: INFO: Waiting for pod pod-subpath-test-downwardapi-s555 to disappear
Sep 18 02:56:11.303: INFO: Pod pod-subpath-test-downwardapi-s555 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-s555
Sep 18 02:56:11.303: INFO: Deleting pod "pod-subpath-test-downwardapi-s555" in namespace "subpath-8250"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 02:56:11.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8250" for this suite.
Sep 18 02:56:17.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 02:56:17.471: INFO: namespace subpath-8250 deletion completed in 6.156930585s

• [SLOW TEST:32.538 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
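The "Atomic writer volumes" container above refers to how the kubelet publishes downward-API (and configMap/secret) volume content: payload files are staged in a fresh timestamped directory and a single `..data` symlink swap makes them visible, so a subpath reader never sees a half-written volume. A minimal sketch of that pattern (directory and link names mirror the kubelet's convention, but this is an illustration, not the kubelet's actual implementation):

```python
import os
import tempfile

def atomic_publish(volume_dir, files):
    """Stage files in a fresh '..ts_*' directory, then retarget the
    '..data' symlink with one rename. Readers following '..data' see
    either the old payload or the new one, never a mixture."""
    staging = tempfile.mkdtemp(dir=volume_dir, prefix="..ts_")
    for name, content in files.items():
        with open(os.path.join(staging, name), "w") as f:
            f.write(content)
    tmp_link = os.path.join(volume_dir, "..data_tmp")
    data_link = os.path.join(volume_dir, "..data")
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    os.symlink(os.path.basename(staging), tmp_link)
    os.rename(tmp_link, data_link)  # atomic on POSIX, even over an existing link
```

The rename-over-symlink step is the atomicity guarantee the subpath test depends on.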
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 02:56:17.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-33fffd98-e0a8-4b9b-92df-744acf6e642b in namespace container-probe-698
Sep 18 02:56:21.593: INFO: Started pod busybox-33fffd98-e0a8-4b9b-92df-744acf6e642b in namespace container-probe-698
STEP: checking the pod's current state and verifying that restartCount is present
Sep 18 02:56:21.599: INFO: Initial restart count of pod busybox-33fffd98-e0a8-4b9b-92df-744acf6e642b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:00:22.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-698" for this suite.
Sep 18 03:00:28.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:00:28.693: INFO: namespace container-probe-698 deletion completed in 6.188613197s

• [SLOW TEST:251.220 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
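The probe test above verifies that a pod whose `cat /tmp/health` liveness probe keeps succeeding is never restarted (restartCount stays 0 for the full observation window). The accounting behind that can be sketched as follows — a simplified model of kubelet probe bookkeeping, assuming the default-style `failureThreshold` semantics:

```python
def count_restarts(probe_results, failure_threshold=3):
    """Return how many restarts a sequence of liveness-probe results
    would trigger. A restart fires only after `failure_threshold`
    consecutive failures; any success resets the failure streak."""
    restarts = 0
    consecutive_failures = 0
    for ok in probe_results:
        if ok:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == failure_threshold:
                restarts += 1
                consecutive_failures = 0
    return restarts
```

With every probe passing, as in the test, the failure streak never builds and the restart count stays at its initial value of 0.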
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:00:28.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:00:28.790: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Sep 18 03:00:28.844: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:28.871: INFO: Number of nodes with available pods: 0
Sep 18 03:00:28.871: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:00:29.882: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:29.889: INFO: Number of nodes with available pods: 0
Sep 18 03:00:29.889: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:00:30.992: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:30.998: INFO: Number of nodes with available pods: 0
Sep 18 03:00:30.998: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:00:31.882: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:31.890: INFO: Number of nodes with available pods: 0
Sep 18 03:00:31.890: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:00:32.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:32.890: INFO: Number of nodes with available pods: 2
Sep 18 03:00:32.890: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Sep 18 03:00:32.973: INFO: Wrong image for pod: daemon-set-6bg2c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:32.973: INFO: Wrong image for pod: daemon-set-tc65k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:32.994: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:34.003: INFO: Wrong image for pod: daemon-set-6bg2c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:34.003: INFO: Wrong image for pod: daemon-set-tc65k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:34.013: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:35.003: INFO: Wrong image for pod: daemon-set-6bg2c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:35.003: INFO: Wrong image for pod: daemon-set-tc65k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:35.012: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:36.002: INFO: Wrong image for pod: daemon-set-6bg2c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:36.003: INFO: Wrong image for pod: daemon-set-tc65k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:36.003: INFO: Pod daemon-set-tc65k is not available
Sep 18 03:00:36.012: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:37.002: INFO: Wrong image for pod: daemon-set-6bg2c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:37.002: INFO: Pod daemon-set-j48df is not available
Sep 18 03:00:37.012: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:38.003: INFO: Wrong image for pod: daemon-set-6bg2c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:38.003: INFO: Pod daemon-set-j48df is not available
Sep 18 03:00:38.013: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:39.001: INFO: Wrong image for pod: daemon-set-6bg2c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:39.001: INFO: Pod daemon-set-j48df is not available
Sep 18 03:00:39.010: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:40.006: INFO: Wrong image for pod: daemon-set-6bg2c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:40.014: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:41.003: INFO: Wrong image for pod: daemon-set-6bg2c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:41.011: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:42.003: INFO: Wrong image for pod: daemon-set-6bg2c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 18 03:00:42.003: INFO: Pod daemon-set-6bg2c is not available
Sep 18 03:00:42.015: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:43.003: INFO: Pod daemon-set-jjkzj is not available
Sep 18 03:00:43.012: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Sep 18 03:00:43.021: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:43.025: INFO: Number of nodes with available pods: 1
Sep 18 03:00:43.025: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:00:44.038: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:44.045: INFO: Number of nodes with available pods: 1
Sep 18 03:00:44.045: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:00:45.759: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:45.770: INFO: Number of nodes with available pods: 1
Sep 18 03:00:45.770: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:00:46.036: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:46.042: INFO: Number of nodes with available pods: 1
Sep 18 03:00:46.042: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:00:47.038: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:47.082: INFO: Number of nodes with available pods: 1
Sep 18 03:00:47.082: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:00:48.037: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:00:48.044: INFO: Number of nodes with available pods: 2
Sep 18 03:00:48.044: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2393, will wait for the garbage collector to delete the pods
Sep 18 03:00:48.143: INFO: Deleting DaemonSet.extensions daemon-set took: 9.360331ms
Sep 18 03:00:48.444: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.963595ms
Sep 18 03:00:54.650: INFO: Number of nodes with available pods: 0
Sep 18 03:00:54.650: INFO: Number of running nodes: 0, number of available pods: 0
Sep 18 03:00:54.655: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2393/daemonsets","resourceVersion":"790647"},"items":null}

Sep 18 03:00:54.659: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2393/pods","resourceVersion":"790647"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:00:54.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2393" for this suite.
Sep 18 03:01:00.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:01:00.878: INFO: namespace daemonsets-2393 deletion completed in 6.189226404s

• [SLOW TEST:32.183 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
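The DaemonSet log above shows the RollingUpdate strategy at work: after the image change, pod `daemon-set-tc65k` goes unavailable and is replaced by `daemon-set-j48df` before `daemon-set-6bg2c` is touched — i.e. at most one pod (the default `maxUnavailable: 1`) is replaced at a time. A toy model of that ordering (replacement pod names here are hypothetical, suffixed `-new`):

```python
def rolling_update(pods, new_image, max_unavailable=1):
    """Replace stale pod images batch by batch, at most `max_unavailable`
    pods down at once. `pods` maps pod name -> image; returns the list
    of deletion batches in the order they were performed."""
    batches = []
    while True:
        stale = [name for name, img in pods.items() if img != new_image]
        if not stale:
            return batches
        batch = stale[:max_unavailable]
        for name in batch:
            del pods[name]                    # old pod terminated
            pods[name + "-new"] = new_image   # replacement becomes available
        batches.append(batch)
```

Each loop iteration corresponds to one "Pod ... is not available" / replacement cycle in the log.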
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:01:00.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-1861
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1861
STEP: Deleting pre-stop pod
Sep 18 03:01:14.072: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:01:14.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1861" for this suite.
Sep 18 03:01:52.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:01:52.295: INFO: namespace prestop-1861 deletion completed in 38.187135905s

• [SLOW TEST:51.415 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
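The PreStop test passes because of pod termination ordering: when the tester pod is deleted, its preStop hook runs (making the HTTP call the server records as `"Received": {"prestop": 1}`) before the container receives SIGTERM. That ordering can be sketched as:

```python
def terminate_pod(pre_stop_hook, stop_container):
    """Model of pod termination ordering: the preStop hook (if any) runs
    to completion first, then the container is signalled to stop.
    Returns the event sequence for inspection."""
    events = []
    if pre_stop_hook is not None:
        pre_stop_hook()        # e.g. the hook's HTTP call to the server pod
        events.append("prestop")
    stop_container()           # SIGTERM follows the hook
    events.append("sigterm")
    return events
```

In the real kubelet the hook and the stop share the pod's termination grace period; that budget is omitted here for brevity.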
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:01:52.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-b49f0290-87ce-483c-9af9-ac7a04c349a0
STEP: Creating a pod to test consume secrets
Sep 18 03:01:52.399: INFO: Waiting up to 5m0s for pod "pod-secrets-69787c41-bf25-4f03-a772-a8eccaa64de5" in namespace "secrets-67" to be "success or failure"
Sep 18 03:01:52.408: INFO: Pod "pod-secrets-69787c41-bf25-4f03-a772-a8eccaa64de5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.800511ms
Sep 18 03:01:54.415: INFO: Pod "pod-secrets-69787c41-bf25-4f03-a772-a8eccaa64de5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015247443s
Sep 18 03:01:56.431: INFO: Pod "pod-secrets-69787c41-bf25-4f03-a772-a8eccaa64de5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03172002s
STEP: Saw pod success
Sep 18 03:01:56.432: INFO: Pod "pod-secrets-69787c41-bf25-4f03-a772-a8eccaa64de5" satisfied condition "success or failure"
Sep 18 03:01:56.437: INFO: Trying to get logs from node iruya-worker pod pod-secrets-69787c41-bf25-4f03-a772-a8eccaa64de5 container secret-volume-test: 
STEP: delete the pod
Sep 18 03:01:56.464: INFO: Waiting for pod pod-secrets-69787c41-bf25-4f03-a772-a8eccaa64de5 to disappear
Sep 18 03:01:56.474: INFO: Pod pod-secrets-69787c41-bf25-4f03-a772-a8eccaa64de5 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:01:56.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-67" for this suite.
Sep 18 03:02:02.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:02:02.641: INFO: namespace secrets-67 deletion completed in 6.1589614s

• [SLOW TEST:10.343 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
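The "mappings and Item Mode" variant above mounts a Secret with an explicit `items` list: each entry remaps a key to a chosen relative path and can carry its own file mode. A sketch of that projection (key/path/mode values are illustrative, not taken from the test's pod spec):

```python
import base64

def project_secret(secret_data, items, default_mode=0o644):
    """Turn base64-encoded Secret data into {path: (bytes, mode)} files,
    honoring per-item path mappings and optional per-item modes."""
    files = {}
    for item in items:
        data = base64.b64decode(secret_data[item["key"]])
        files[item["path"]] = (data, item.get("mode", default_mode))
    return files
```

Items without an explicit mode fall back to the volume's default mode.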
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:02:02.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-746a584c-43df-4183-b9ce-ce957d061e61
STEP: Creating a pod to test consume secrets
Sep 18 03:02:02.742: INFO: Waiting up to 5m0s for pod "pod-secrets-3bff457d-d618-403d-9308-5158506c71f4" in namespace "secrets-4456" to be "success or failure"
Sep 18 03:02:02.786: INFO: Pod "pod-secrets-3bff457d-d618-403d-9308-5158506c71f4": Phase="Pending", Reason="", readiness=false. Elapsed: 44.344241ms
Sep 18 03:02:04.794: INFO: Pod "pod-secrets-3bff457d-d618-403d-9308-5158506c71f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051715399s
Sep 18 03:02:06.800: INFO: Pod "pod-secrets-3bff457d-d618-403d-9308-5158506c71f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057827221s
STEP: Saw pod success
Sep 18 03:02:06.800: INFO: Pod "pod-secrets-3bff457d-d618-403d-9308-5158506c71f4" satisfied condition "success or failure"
Sep 18 03:02:06.804: INFO: Trying to get logs from node iruya-worker pod pod-secrets-3bff457d-d618-403d-9308-5158506c71f4 container secret-volume-test: 
STEP: delete the pod
Sep 18 03:02:06.824: INFO: Waiting for pod pod-secrets-3bff457d-d618-403d-9308-5158506c71f4 to disappear
Sep 18 03:02:06.827: INFO: Pod pod-secrets-3bff457d-d618-403d-9308-5158506c71f4 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:02:06.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4456" for this suite.
Sep 18 03:02:12.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:02:13.008: INFO: namespace secrets-4456 deletion completed in 6.17335811s

• [SLOW TEST:10.366 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
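The recurring `Waiting up to 5m0s for pod ... to be "success or failure"` lines throughout this run follow one pattern: poll the pod's phase every couple of seconds until it reaches a terminal phase or the timeout expires. A sketch of that wait loop (`get_phase` is a hypothetical stand-in for the API read, not the e2e framework's actual helper):

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns Succeeded or Failed.
    Raises TimeoutError if the pod stays non-terminal past `timeout`,
    mirroring the 5m0s budget in the log lines above."""
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

The "Elapsed: ..." log lines correspond to one `get_phase` call per loop iteration.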
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:02:13.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 18 03:02:16.159: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:02:16.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4897" for this suite.
Sep 18 03:02:22.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:02:22.465: INFO: namespace container-runtime-4897 deletion completed in 6.2646969s

• [SLOW TEST:9.451 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
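The termination-message test exercises `terminationMessagePolicy: FallbackToLogsOnError`: the kubelet reads the termination-message file if it has content, and falls back to the tail of the container log only when the file is empty and the container exited with an error. Since the pod above succeeds and writes "OK" to the file, the file wins. A simplified model of that decision (the tail-size cap here is an assumption, not the kubelet's exact limit):

```python
def termination_message(file_contents, container_logs, exit_code, policy="File"):
    """Pick a container termination message: prefer the message file;
    with FallbackToLogsOnError, use a log tail only on non-zero exit."""
    if file_contents:
        return file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return container_logs[-4096:]  # assumed cap on the fallback tail
    return ""
```

This is why the log reports `Expected: &{OK} to match Container's Termination Message: OK`.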
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:02:22.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:02:22.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6342" for this suite.
Sep 18 03:02:28.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:02:28.799: INFO: namespace kubelet-test-6342 deletion completed in 6.175314719s

• [SLOW TEST:6.333 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:02:28.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-c812c685-65d4-468e-8dd0-00bc50e8cbe8
STEP: Creating configMap with name cm-test-opt-upd-879e7aac-54df-4240-a917-aba0fd623bc9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c812c685-65d4-468e-8dd0-00bc50e8cbe8
STEP: Updating configmap cm-test-opt-upd-879e7aac-54df-4240-a917-aba0fd623bc9
STEP: Creating configMap with name cm-test-opt-create-02d81372-38c3-4909-b5ae-40d52cb10720
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:02:37.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1446" for this suite.
Sep 18 03:02:59.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:02:59.196: INFO: namespace projected-1446 deletion completed in 22.161085143s

• [SLOW TEST:30.395 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:02:59.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4194
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Sep 18 03:02:59.359: INFO: Found 0 stateful pods, waiting for 3
Sep 18 03:03:09.367: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 03:03:09.367: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 03:03:09.367: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Sep 18 03:03:19.369: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 03:03:19.369: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 03:03:19.369: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Sep 18 03:03:19.432: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Sep 18 03:03:29.509: INFO: Updating stateful set ss2
Sep 18 03:03:29.542: INFO: Waiting for Pod statefulset-4194/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Sep 18 03:03:39.679: INFO: Found 2 stateful pods, waiting for 3
Sep 18 03:03:49.688: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 03:03:49.688: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 18 03:03:49.688: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Sep 18 03:03:49.720: INFO: Updating stateful set ss2
Sep 18 03:03:49.754: INFO: Waiting for Pod statefulset-4194/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Sep 18 03:03:59.794: INFO: Updating stateful set ss2
Sep 18 03:03:59.849: INFO: Waiting for StatefulSet statefulset-4194/ss2 to complete update
Sep 18 03:03:59.850: INFO: Waiting for Pod statefulset-4194/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep 18 03:04:09.866: INFO: Deleting all statefulset in ns statefulset-4194
Sep 18 03:04:09.871: INFO: Scaling statefulset ss2 to 0
Sep 18 03:04:29.926: INFO: Waiting for statefulset status.replicas updated to 0
Sep 18 03:04:29.930: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:04:29.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4194" for this suite.
Sep 18 03:04:35.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:04:36.137: INFO: namespace statefulset-4194 deletion completed in 6.178656391s

• [SLOW TEST:96.940 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:04:36.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:04:36.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7675" for this suite.
Sep 18 03:04:42.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:04:42.373: INFO: namespace services-7675 deletion completed in 6.149114118s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.233 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:04:42.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0918 03:04:43.178965       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 18 03:04:43.179: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:04:43.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3011" for this suite.
Sep 18 03:04:49.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:04:49.421: INFO: namespace gc-3011 deletion completed in 6.208497781s

• [SLOW TEST:7.047 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:04:49.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-5a7f8c5d-058a-4252-b830-19a0945e61d8
STEP: Creating a pod to test consume secrets
Sep 18 03:04:49.518: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fd8e502c-d008-42a0-8f86-28b66d3f8651" in namespace "projected-3497" to be "success or failure"
Sep 18 03:04:49.524: INFO: Pod "pod-projected-secrets-fd8e502c-d008-42a0-8f86-28b66d3f8651": Phase="Pending", Reason="", readiness=false. Elapsed: 5.38199ms
Sep 18 03:04:51.531: INFO: Pod "pod-projected-secrets-fd8e502c-d008-42a0-8f86-28b66d3f8651": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012450872s
Sep 18 03:04:53.539: INFO: Pod "pod-projected-secrets-fd8e502c-d008-42a0-8f86-28b66d3f8651": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019772448s
STEP: Saw pod success
Sep 18 03:04:53.539: INFO: Pod "pod-projected-secrets-fd8e502c-d008-42a0-8f86-28b66d3f8651" satisfied condition "success or failure"
Sep 18 03:04:53.544: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-fd8e502c-d008-42a0-8f86-28b66d3f8651 container projected-secret-volume-test: 
STEP: delete the pod
Sep 18 03:04:53.574: INFO: Waiting for pod pod-projected-secrets-fd8e502c-d008-42a0-8f86-28b66d3f8651 to disappear
Sep 18 03:04:53.590: INFO: Pod pod-projected-secrets-fd8e502c-d008-42a0-8f86-28b66d3f8651 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:04:53.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3497" for this suite.
Sep 18 03:04:59.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:04:59.828: INFO: namespace projected-3497 deletion completed in 6.229629274s

• [SLOW TEST:10.403 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:04:59.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 18 03:04:59.908: INFO: Waiting up to 5m0s for pod "pod-7fe8b9ab-c72f-4df3-a04a-5470a2caa1ee" in namespace "emptydir-973" to be "success or failure"
Sep 18 03:04:59.922: INFO: Pod "pod-7fe8b9ab-c72f-4df3-a04a-5470a2caa1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 13.987168ms
Sep 18 03:05:01.929: INFO: Pod "pod-7fe8b9ab-c72f-4df3-a04a-5470a2caa1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021119331s
Sep 18 03:05:03.936: INFO: Pod "pod-7fe8b9ab-c72f-4df3-a04a-5470a2caa1ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028482573s
STEP: Saw pod success
Sep 18 03:05:03.937: INFO: Pod "pod-7fe8b9ab-c72f-4df3-a04a-5470a2caa1ee" satisfied condition "success or failure"
Sep 18 03:05:03.942: INFO: Trying to get logs from node iruya-worker2 pod pod-7fe8b9ab-c72f-4df3-a04a-5470a2caa1ee container test-container: 
STEP: delete the pod
Sep 18 03:05:03.982: INFO: Waiting for pod pod-7fe8b9ab-c72f-4df3-a04a-5470a2caa1ee to disappear
Sep 18 03:05:04.004: INFO: Pod pod-7fe8b9ab-c72f-4df3-a04a-5470a2caa1ee no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:05:04.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-973" for this suite.
Sep 18 03:05:10.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:05:10.171: INFO: namespace emptydir-973 deletion completed in 6.156568688s

• [SLOW TEST:10.343 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:05:10.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b00a5d04-b94e-48ed-a0dc-6614f7e1f4e8
STEP: Creating a pod to test consume secrets
Sep 18 03:05:10.322: INFO: Waiting up to 5m0s for pod "pod-secrets-1e8dc078-9367-4538-be50-fa00d3d14493" in namespace "secrets-4480" to be "success or failure"
Sep 18 03:05:10.334: INFO: Pod "pod-secrets-1e8dc078-9367-4538-be50-fa00d3d14493": Phase="Pending", Reason="", readiness=false. Elapsed: 11.82811ms
Sep 18 03:05:12.346: INFO: Pod "pod-secrets-1e8dc078-9367-4538-be50-fa00d3d14493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024025816s
Sep 18 03:05:14.354: INFO: Pod "pod-secrets-1e8dc078-9367-4538-be50-fa00d3d14493": Phase="Running", Reason="", readiness=true. Elapsed: 4.031592099s
Sep 18 03:05:16.362: INFO: Pod "pod-secrets-1e8dc078-9367-4538-be50-fa00d3d14493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039722065s
STEP: Saw pod success
Sep 18 03:05:16.363: INFO: Pod "pod-secrets-1e8dc078-9367-4538-be50-fa00d3d14493" satisfied condition "success or failure"
Sep 18 03:05:16.369: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-1e8dc078-9367-4538-be50-fa00d3d14493 container secret-env-test: 
STEP: delete the pod
Sep 18 03:05:16.394: INFO: Waiting for pod pod-secrets-1e8dc078-9367-4538-be50-fa00d3d14493 to disappear
Sep 18 03:05:16.419: INFO: Pod pod-secrets-1e8dc078-9367-4538-be50-fa00d3d14493 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:05:16.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4480" for this suite.
Sep 18 03:05:22.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:05:22.605: INFO: namespace secrets-4480 deletion completed in 6.176046461s

• [SLOW TEST:12.428 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:05:22.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 03:05:22.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37445a08-1159-41bc-bec4-007702e5c1a0" in namespace "downward-api-9438" to be "success or failure"
Sep 18 03:05:22.722: INFO: Pod "downwardapi-volume-37445a08-1159-41bc-bec4-007702e5c1a0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.018978ms
Sep 18 03:05:24.730: INFO: Pod "downwardapi-volume-37445a08-1159-41bc-bec4-007702e5c1a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02525792s
Sep 18 03:05:26.737: INFO: Pod "downwardapi-volume-37445a08-1159-41bc-bec4-007702e5c1a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032272067s
STEP: Saw pod success
Sep 18 03:05:26.737: INFO: Pod "downwardapi-volume-37445a08-1159-41bc-bec4-007702e5c1a0" satisfied condition "success or failure"
Sep 18 03:05:26.741: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-37445a08-1159-41bc-bec4-007702e5c1a0 container client-container: 
STEP: delete the pod
Sep 18 03:05:26.944: INFO: Waiting for pod downwardapi-volume-37445a08-1159-41bc-bec4-007702e5c1a0 to disappear
Sep 18 03:05:26.950: INFO: Pod downwardapi-volume-37445a08-1159-41bc-bec4-007702e5c1a0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:05:26.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9438" for this suite.
Sep 18 03:05:32.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:05:33.126: INFO: namespace downward-api-9438 deletion completed in 6.159004119s

• [SLOW TEST:10.519 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:05:33.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Sep 18 03:05:41.369: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:05:41.394: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 18 03:05:43.395: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:05:43.403: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 18 03:05:45.395: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:05:45.402: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 18 03:05:47.395: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:05:47.404: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 18 03:05:49.395: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:05:49.402: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 18 03:05:51.395: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:05:51.402: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 18 03:05:53.395: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:05:53.403: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 18 03:05:55.395: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:05:55.403: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 18 03:05:57.395: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:05:57.401: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 18 03:05:59.395: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:05:59.400: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 18 03:06:01.395: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:06:01.402: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 18 03:06:03.395: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:06:03.403: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 18 03:06:05.395: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 18 03:06:05.403: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:06:05.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-532" for this suite.
Sep 18 03:06:27.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:06:27.573: INFO: namespace container-lifecycle-hook-532 deletion completed in 22.153667954s

• [SLOW TEST:54.444 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:06:27.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-6d59e579-b17b-4d66-a542-3f59f682a101
STEP: Creating a pod to test consume secrets
Sep 18 03:06:27.717: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-be22a59b-ad4b-4480-9929-2159c704602d" in namespace "projected-3774" to be "success or failure"
Sep 18 03:06:27.737: INFO: Pod "pod-projected-secrets-be22a59b-ad4b-4480-9929-2159c704602d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.334063ms
Sep 18 03:06:29.743: INFO: Pod "pod-projected-secrets-be22a59b-ad4b-4480-9929-2159c704602d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026172047s
Sep 18 03:06:31.769: INFO: Pod "pod-projected-secrets-be22a59b-ad4b-4480-9929-2159c704602d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052040285s
STEP: Saw pod success
Sep 18 03:06:31.769: INFO: Pod "pod-projected-secrets-be22a59b-ad4b-4480-9929-2159c704602d" satisfied condition "success or failure"
Sep 18 03:06:31.775: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-be22a59b-ad4b-4480-9929-2159c704602d container projected-secret-volume-test: 
STEP: delete the pod
Sep 18 03:06:31.811: INFO: Waiting for pod pod-projected-secrets-be22a59b-ad4b-4480-9929-2159c704602d to disappear
Sep 18 03:06:31.826: INFO: Pod pod-projected-secrets-be22a59b-ad4b-4480-9929-2159c704602d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:06:31.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3774" for this suite.
Sep 18 03:06:37.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:06:38.053: INFO: namespace projected-3774 deletion completed in 6.2189368s

• [SLOW TEST:10.479 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
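The "mappings and Item Mode" being tested refers to a projected secret volume where individual keys are remapped to new paths with an explicit file mode. A hypothetical sketch of such a pod spec, assuming names and paths not shown in the log:

```yaml
# Sketch of a projected secret volume with a key-to-path mapping
# and an explicit item mode. Names, key, path, and mode are assumptions;
# the log only shows the secret/pod name patterns.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox            # assumed image
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1           # original secret key
            path: new-path-data-1 # remapped file name in the volume
            mode: 0400            # the per-item file mode under test
```

The test then reads the file from inside the pod and verifies both its contents and that its mode matches the requested value.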
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:06:38.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7697
I0918 03:06:38.214170       7 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7697, replica count: 1
I0918 03:06:39.267774       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0918 03:06:40.269435       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0918 03:06:41.271172       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0918 03:06:42.272541       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 18 03:06:42.410: INFO: Created: latency-svc-9mtrb
Sep 18 03:06:42.426: INFO: Got endpoints: latency-svc-9mtrb [52.032178ms]
Sep 18 03:06:42.494: INFO: Created: latency-svc-q2594
Sep 18 03:06:42.509: INFO: Got endpoints: latency-svc-q2594 [81.400554ms]
Sep 18 03:06:42.530: INFO: Created: latency-svc-gtlmj
Sep 18 03:06:42.545: INFO: Got endpoints: latency-svc-gtlmj [117.919951ms]
Sep 18 03:06:42.601: INFO: Created: latency-svc-ssp8p
Sep 18 03:06:42.604: INFO: Got endpoints: latency-svc-ssp8p [177.141659ms]
Sep 18 03:06:42.631: INFO: Created: latency-svc-pshgl
Sep 18 03:06:42.641: INFO: Got endpoints: latency-svc-pshgl [213.495067ms]
Sep 18 03:06:42.661: INFO: Created: latency-svc-wgkz8
Sep 18 03:06:42.671: INFO: Got endpoints: latency-svc-wgkz8 [242.623294ms]
Sep 18 03:06:42.698: INFO: Created: latency-svc-27lhk
Sep 18 03:06:42.733: INFO: Got endpoints: latency-svc-27lhk [305.419481ms]
Sep 18 03:06:42.758: INFO: Created: latency-svc-fvxbf
Sep 18 03:06:42.774: INFO: Got endpoints: latency-svc-fvxbf [345.350925ms]
Sep 18 03:06:42.794: INFO: Created: latency-svc-22mnd
Sep 18 03:06:42.825: INFO: Got endpoints: latency-svc-22mnd [397.535242ms]
Sep 18 03:06:42.886: INFO: Created: latency-svc-zbh52
Sep 18 03:06:42.889: INFO: Got endpoints: latency-svc-zbh52 [460.742955ms]
Sep 18 03:06:42.919: INFO: Created: latency-svc-m7fhd
Sep 18 03:06:42.930: INFO: Got endpoints: latency-svc-m7fhd [503.03377ms]
Sep 18 03:06:42.955: INFO: Created: latency-svc-qbg67
Sep 18 03:06:42.966: INFO: Got endpoints: latency-svc-qbg67 [538.460907ms]
Sep 18 03:06:43.020: INFO: Created: latency-svc-55tls
Sep 18 03:06:43.023: INFO: Got endpoints: latency-svc-55tls [594.601316ms]
Sep 18 03:06:43.052: INFO: Created: latency-svc-p6w5h
Sep 18 03:06:43.069: INFO: Got endpoints: latency-svc-p6w5h [640.879738ms]
Sep 18 03:06:43.088: INFO: Created: latency-svc-t87z2
Sep 18 03:06:43.105: INFO: Got endpoints: latency-svc-t87z2 [678.124555ms]
Sep 18 03:06:43.169: INFO: Created: latency-svc-n7hsl
Sep 18 03:06:43.173: INFO: Got endpoints: latency-svc-n7hsl [744.868889ms]
Sep 18 03:06:43.206: INFO: Created: latency-svc-qw6dq
Sep 18 03:06:43.219: INFO: Got endpoints: latency-svc-qw6dq [710.140672ms]
Sep 18 03:06:43.244: INFO: Created: latency-svc-r7pbf
Sep 18 03:06:43.262: INFO: Got endpoints: latency-svc-r7pbf [716.229374ms]
Sep 18 03:06:43.307: INFO: Created: latency-svc-jc7qw
Sep 18 03:06:43.310: INFO: Got endpoints: latency-svc-jc7qw [705.506109ms]
Sep 18 03:06:43.375: INFO: Created: latency-svc-w6qfc
Sep 18 03:06:43.389: INFO: Got endpoints: latency-svc-w6qfc [747.7932ms]
Sep 18 03:06:43.447: INFO: Created: latency-svc-t9gdx
Sep 18 03:06:43.458: INFO: Got endpoints: latency-svc-t9gdx [787.195501ms]
Sep 18 03:06:43.490: INFO: Created: latency-svc-4h25p
Sep 18 03:06:43.503: INFO: Got endpoints: latency-svc-4h25p [770.317621ms]
Sep 18 03:06:43.526: INFO: Created: latency-svc-98jwj
Sep 18 03:06:43.539: INFO: Got endpoints: latency-svc-98jwj [765.119562ms]
Sep 18 03:06:43.581: INFO: Created: latency-svc-n6bzq
Sep 18 03:06:43.594: INFO: Got endpoints: latency-svc-n6bzq [768.550345ms]
Sep 18 03:06:43.626: INFO: Created: latency-svc-zsm9l
Sep 18 03:06:43.642: INFO: Got endpoints: latency-svc-zsm9l [752.660059ms]
Sep 18 03:06:43.669: INFO: Created: latency-svc-l2sxp
Sep 18 03:06:43.738: INFO: Got endpoints: latency-svc-l2sxp [807.17025ms]
Sep 18 03:06:43.741: INFO: Created: latency-svc-bgb6n
Sep 18 03:06:43.745: INFO: Got endpoints: latency-svc-bgb6n [778.386352ms]
Sep 18 03:06:43.771: INFO: Created: latency-svc-gh8js
Sep 18 03:06:43.786: INFO: Got endpoints: latency-svc-gh8js [762.972487ms]
Sep 18 03:06:43.807: INFO: Created: latency-svc-xthjs
Sep 18 03:06:43.823: INFO: Got endpoints: latency-svc-xthjs [753.256079ms]
Sep 18 03:06:43.882: INFO: Created: latency-svc-vgchn
Sep 18 03:06:43.885: INFO: Got endpoints: latency-svc-vgchn [779.184402ms]
Sep 18 03:06:43.933: INFO: Created: latency-svc-rfdff
Sep 18 03:06:43.957: INFO: Got endpoints: latency-svc-rfdff [783.588851ms]
Sep 18 03:06:44.020: INFO: Created: latency-svc-6jgxq
Sep 18 03:06:44.023: INFO: Got endpoints: latency-svc-6jgxq [803.531579ms]
Sep 18 03:06:44.048: INFO: Created: latency-svc-n8pxx
Sep 18 03:06:44.064: INFO: Got endpoints: latency-svc-n8pxx [801.957472ms]
Sep 18 03:06:44.083: INFO: Created: latency-svc-5mrdj
Sep 18 03:06:44.100: INFO: Got endpoints: latency-svc-5mrdj [789.699862ms]
Sep 18 03:06:44.164: INFO: Created: latency-svc-pwzmt
Sep 18 03:06:44.166: INFO: Got endpoints: latency-svc-pwzmt [777.01415ms]
Sep 18 03:06:44.203: INFO: Created: latency-svc-n9mdw
Sep 18 03:06:44.221: INFO: Got endpoints: latency-svc-n9mdw [762.727578ms]
Sep 18 03:06:44.251: INFO: Created: latency-svc-bjs42
Sep 18 03:06:44.258: INFO: Got endpoints: latency-svc-bjs42 [754.014392ms]
Sep 18 03:06:44.308: INFO: Created: latency-svc-cxqqh
Sep 18 03:06:44.335: INFO: Got endpoints: latency-svc-cxqqh [796.064875ms]
Sep 18 03:06:44.395: INFO: Created: latency-svc-dd6fc
Sep 18 03:06:44.433: INFO: Got endpoints: latency-svc-dd6fc [838.680771ms]
Sep 18 03:06:44.450: INFO: Created: latency-svc-s59xt
Sep 18 03:06:44.467: INFO: Got endpoints: latency-svc-s59xt [825.274917ms]
Sep 18 03:06:44.491: INFO: Created: latency-svc-rvxl2
Sep 18 03:06:44.510: INFO: Got endpoints: latency-svc-rvxl2 [771.817605ms]
Sep 18 03:06:44.596: INFO: Created: latency-svc-vtj6b
Sep 18 03:06:44.599: INFO: Got endpoints: latency-svc-vtj6b [853.323239ms]
Sep 18 03:06:44.647: INFO: Created: latency-svc-sbr9x
Sep 18 03:06:44.660: INFO: Got endpoints: latency-svc-sbr9x [873.79586ms]
Sep 18 03:06:44.683: INFO: Created: latency-svc-dtmc5
Sep 18 03:06:44.726: INFO: Got endpoints: latency-svc-dtmc5 [903.008326ms]
Sep 18 03:06:44.749: INFO: Created: latency-svc-9k42d
Sep 18 03:06:44.778: INFO: Got endpoints: latency-svc-9k42d [893.147286ms]
Sep 18 03:06:44.808: INFO: Created: latency-svc-8pd8q
Sep 18 03:06:44.852: INFO: Got endpoints: latency-svc-8pd8q [894.690969ms]
Sep 18 03:06:44.862: INFO: Created: latency-svc-cmmzx
Sep 18 03:06:44.877: INFO: Got endpoints: latency-svc-cmmzx [853.260766ms]
Sep 18 03:06:44.898: INFO: Created: latency-svc-zhds4
Sep 18 03:06:44.907: INFO: Got endpoints: latency-svc-zhds4 [842.686832ms]
Sep 18 03:06:44.929: INFO: Created: latency-svc-c8plz
Sep 18 03:06:44.943: INFO: Got endpoints: latency-svc-c8plz [842.909271ms]
Sep 18 03:06:44.996: INFO: Created: latency-svc-rbwn6
Sep 18 03:06:44.999: INFO: Got endpoints: latency-svc-rbwn6 [832.813015ms]
Sep 18 03:06:45.033: INFO: Created: latency-svc-fh9md
Sep 18 03:06:45.046: INFO: Got endpoints: latency-svc-fh9md [824.145784ms]
Sep 18 03:06:45.066: INFO: Created: latency-svc-pdcfq
Sep 18 03:06:45.091: INFO: Got endpoints: latency-svc-pdcfq [833.536814ms]
Sep 18 03:06:45.145: INFO: Created: latency-svc-x2b5x
Sep 18 03:06:45.148: INFO: Got endpoints: latency-svc-x2b5x [812.664068ms]
Sep 18 03:06:45.175: INFO: Created: latency-svc-z7x4c
Sep 18 03:06:45.190: INFO: Got endpoints: latency-svc-z7x4c [757.525153ms]
Sep 18 03:06:45.211: INFO: Created: latency-svc-h5r78
Sep 18 03:06:45.226: INFO: Got endpoints: latency-svc-h5r78 [758.701119ms]
Sep 18 03:06:45.289: INFO: Created: latency-svc-65rkt
Sep 18 03:06:45.319: INFO: Created: latency-svc-rhkpc
Sep 18 03:06:45.319: INFO: Got endpoints: latency-svc-65rkt [808.842327ms]
Sep 18 03:06:45.335: INFO: Got endpoints: latency-svc-rhkpc [736.233515ms]
Sep 18 03:06:45.360: INFO: Created: latency-svc-xlq99
Sep 18 03:06:45.383: INFO: Got endpoints: latency-svc-xlq99 [723.31616ms]
Sep 18 03:06:45.433: INFO: Created: latency-svc-bklwm
Sep 18 03:06:45.436: INFO: Got endpoints: latency-svc-bklwm [709.472255ms]
Sep 18 03:06:45.483: INFO: Created: latency-svc-4vqsc
Sep 18 03:06:45.498: INFO: Got endpoints: latency-svc-4vqsc [719.5443ms]
Sep 18 03:06:45.517: INFO: Created: latency-svc-82zkc
Sep 18 03:06:45.558: INFO: Got endpoints: latency-svc-82zkc [705.686297ms]
Sep 18 03:06:45.570: INFO: Created: latency-svc-2zhbk
Sep 18 03:06:45.594: INFO: Got endpoints: latency-svc-2zhbk [717.511516ms]
Sep 18 03:06:45.618: INFO: Created: latency-svc-kmt4h
Sep 18 03:06:45.630: INFO: Got endpoints: latency-svc-kmt4h [723.254697ms]
Sep 18 03:06:45.655: INFO: Created: latency-svc-k898v
Sep 18 03:06:45.696: INFO: Got endpoints: latency-svc-k898v [752.45649ms]
Sep 18 03:06:45.719: INFO: Created: latency-svc-t9knl
Sep 18 03:06:45.751: INFO: Got endpoints: latency-svc-t9knl [751.459251ms]
Sep 18 03:06:45.774: INFO: Created: latency-svc-r8qh7
Sep 18 03:06:45.787: INFO: Got endpoints: latency-svc-r8qh7 [741.143738ms]
Sep 18 03:06:45.853: INFO: Created: latency-svc-gb6r6
Sep 18 03:06:45.858: INFO: Got endpoints: latency-svc-gb6r6 [766.265652ms]
Sep 18 03:06:45.907: INFO: Created: latency-svc-5q824
Sep 18 03:06:45.919: INFO: Got endpoints: latency-svc-5q824 [771.084579ms]
Sep 18 03:06:45.943: INFO: Created: latency-svc-4nfgw
Sep 18 03:06:45.990: INFO: Got endpoints: latency-svc-4nfgw [799.130401ms]
Sep 18 03:06:45.996: INFO: Created: latency-svc-dw4gh
Sep 18 03:06:46.010: INFO: Got endpoints: latency-svc-dw4gh [783.043798ms]
Sep 18 03:06:46.031: INFO: Created: latency-svc-g6f8g
Sep 18 03:06:46.040: INFO: Got endpoints: latency-svc-g6f8g [720.549592ms]
Sep 18 03:06:46.063: INFO: Created: latency-svc-pwgn9
Sep 18 03:06:46.086: INFO: Got endpoints: latency-svc-pwgn9 [750.653424ms]
Sep 18 03:06:46.153: INFO: Created: latency-svc-xx4vx
Sep 18 03:06:46.195: INFO: Got endpoints: latency-svc-xx4vx [811.122393ms]
Sep 18 03:06:46.195: INFO: Created: latency-svc-lf4bh
Sep 18 03:06:46.221: INFO: Got endpoints: latency-svc-lf4bh [785.53386ms]
Sep 18 03:06:46.332: INFO: Created: latency-svc-dd7n9
Sep 18 03:06:46.357: INFO: Got endpoints: latency-svc-dd7n9 [858.556418ms]
Sep 18 03:06:46.357: INFO: Created: latency-svc-567dc
Sep 18 03:06:46.371: INFO: Got endpoints: latency-svc-567dc [813.103357ms]
Sep 18 03:06:46.404: INFO: Created: latency-svc-p8hcf
Sep 18 03:06:46.420: INFO: Got endpoints: latency-svc-p8hcf [825.118978ms]
Sep 18 03:06:46.475: INFO: Created: latency-svc-dw24h
Sep 18 03:06:46.512: INFO: Got endpoints: latency-svc-dw24h [881.2872ms]
Sep 18 03:06:46.562: INFO: Created: latency-svc-kdzgf
Sep 18 03:06:46.566: INFO: Got endpoints: latency-svc-kdzgf [869.784383ms]
Sep 18 03:06:46.607: INFO: Created: latency-svc-znpqf
Sep 18 03:06:46.614: INFO: Got endpoints: latency-svc-znpqf [863.534193ms]
Sep 18 03:06:46.638: INFO: Created: latency-svc-28zl2
Sep 18 03:06:46.652: INFO: Got endpoints: latency-svc-28zl2 [865.049277ms]
Sep 18 03:06:46.744: INFO: Created: latency-svc-dlxkg
Sep 18 03:06:46.747: INFO: Got endpoints: latency-svc-dlxkg [888.496309ms]
Sep 18 03:06:46.801: INFO: Created: latency-svc-kj6ds
Sep 18 03:06:46.813: INFO: Got endpoints: latency-svc-kj6ds [893.582693ms]
Sep 18 03:06:46.837: INFO: Created: latency-svc-s928d
Sep 18 03:06:46.899: INFO: Got endpoints: latency-svc-s928d [909.517325ms]
Sep 18 03:06:46.938: INFO: Created: latency-svc-sbzgq
Sep 18 03:06:46.952: INFO: Got endpoints: latency-svc-sbzgq [942.256688ms]
Sep 18 03:06:46.973: INFO: Created: latency-svc-7vmgs
Sep 18 03:06:46.988: INFO: Got endpoints: latency-svc-7vmgs [947.973226ms]
Sep 18 03:06:47.038: INFO: Created: latency-svc-c85bz
Sep 18 03:06:47.042: INFO: Got endpoints: latency-svc-c85bz [955.782106ms]
Sep 18 03:06:47.065: INFO: Created: latency-svc-lpx9n
Sep 18 03:06:47.073: INFO: Got endpoints: latency-svc-lpx9n [877.310542ms]
Sep 18 03:06:47.094: INFO: Created: latency-svc-d8nmf
Sep 18 03:06:47.103: INFO: Got endpoints: latency-svc-d8nmf [880.785894ms]
Sep 18 03:06:47.130: INFO: Created: latency-svc-2dkn6
Sep 18 03:06:47.169: INFO: Got endpoints: latency-svc-2dkn6 [811.864109ms]
Sep 18 03:06:47.184: INFO: Created: latency-svc-ssgc7
Sep 18 03:06:47.226: INFO: Got endpoints: latency-svc-ssgc7 [854.249156ms]
Sep 18 03:06:47.269: INFO: Created: latency-svc-dmxrl
Sep 18 03:06:47.307: INFO: Got endpoints: latency-svc-dmxrl [886.391101ms]
Sep 18 03:06:47.316: INFO: Created: latency-svc-vgtq2
Sep 18 03:06:47.332: INFO: Got endpoints: latency-svc-vgtq2 [820.249244ms]
Sep 18 03:06:47.359: INFO: Created: latency-svc-g658z
Sep 18 03:06:47.368: INFO: Got endpoints: latency-svc-g658z [802.149662ms]
Sep 18 03:06:47.395: INFO: Created: latency-svc-2btn8
Sep 18 03:06:47.439: INFO: Got endpoints: latency-svc-2btn8 [823.889446ms]
Sep 18 03:06:47.454: INFO: Created: latency-svc-597h8
Sep 18 03:06:47.471: INFO: Got endpoints: latency-svc-597h8 [818.664968ms]
Sep 18 03:06:47.496: INFO: Created: latency-svc-6gvmt
Sep 18 03:06:47.513: INFO: Got endpoints: latency-svc-6gvmt [766.499602ms]
Sep 18 03:06:47.538: INFO: Created: latency-svc-vwnj8
Sep 18 03:06:47.600: INFO: Got endpoints: latency-svc-vwnj8 [786.848277ms]
Sep 18 03:06:47.607: INFO: Created: latency-svc-m8hmk
Sep 18 03:06:47.640: INFO: Got endpoints: latency-svc-m8hmk [740.287192ms]
Sep 18 03:06:47.677: INFO: Created: latency-svc-gpqhj
Sep 18 03:06:47.694: INFO: Got endpoints: latency-svc-gpqhj [741.853242ms]
Sep 18 03:06:47.745: INFO: Created: latency-svc-6w6nr
Sep 18 03:06:47.754: INFO: Got endpoints: latency-svc-6w6nr [765.606246ms]
Sep 18 03:06:47.785: INFO: Created: latency-svc-2whqv
Sep 18 03:06:47.796: INFO: Got endpoints: latency-svc-2whqv [754.018552ms]
Sep 18 03:06:47.826: INFO: Created: latency-svc-fn92k
Sep 18 03:06:47.870: INFO: Got endpoints: latency-svc-fn92k [796.908641ms]
Sep 18 03:06:47.885: INFO: Created: latency-svc-8mcwb
Sep 18 03:06:47.899: INFO: Got endpoints: latency-svc-8mcwb [796.50083ms]
Sep 18 03:06:47.958: INFO: Created: latency-svc-pz6wl
Sep 18 03:06:48.002: INFO: Got endpoints: latency-svc-pz6wl [832.631834ms]
Sep 18 03:06:48.013: INFO: Created: latency-svc-7fdzb
Sep 18 03:06:48.026: INFO: Got endpoints: latency-svc-7fdzb [799.125572ms]
Sep 18 03:06:48.048: INFO: Created: latency-svc-p5vqv
Sep 18 03:06:48.062: INFO: Got endpoints: latency-svc-p5vqv [755.121577ms]
Sep 18 03:06:48.083: INFO: Created: latency-svc-2xth7
Sep 18 03:06:48.098: INFO: Got endpoints: latency-svc-2xth7 [765.842212ms]
Sep 18 03:06:48.153: INFO: Created: latency-svc-6ffsj
Sep 18 03:06:48.158: INFO: Got endpoints: latency-svc-6ffsj [789.40293ms]
Sep 18 03:06:48.180: INFO: Created: latency-svc-j7jnj
Sep 18 03:06:48.195: INFO: Got endpoints: latency-svc-j7jnj [755.829826ms]
Sep 18 03:06:48.216: INFO: Created: latency-svc-8q2jj
Sep 18 03:06:48.231: INFO: Got endpoints: latency-svc-8q2jj [759.241189ms]
Sep 18 03:06:48.301: INFO: Created: latency-svc-4wsb7
Sep 18 03:06:48.304: INFO: Got endpoints: latency-svc-4wsb7 [790.555697ms]
Sep 18 03:06:48.342: INFO: Created: latency-svc-zjpkh
Sep 18 03:06:48.357: INFO: Got endpoints: latency-svc-zjpkh [756.748388ms]
Sep 18 03:06:48.390: INFO: Created: latency-svc-5spxc
Sep 18 03:06:48.450: INFO: Got endpoints: latency-svc-5spxc [810.094177ms]
Sep 18 03:06:48.535: INFO: Created: latency-svc-77hkr
Sep 18 03:06:48.583: INFO: Got endpoints: latency-svc-77hkr [888.256355ms]
Sep 18 03:06:48.611: INFO: Created: latency-svc-88jfw
Sep 18 03:06:48.628: INFO: Got endpoints: latency-svc-88jfw [873.608522ms]
Sep 18 03:06:48.665: INFO: Created: latency-svc-9prv8
Sep 18 03:06:48.738: INFO: Got endpoints: latency-svc-9prv8 [941.022429ms]
Sep 18 03:06:48.740: INFO: Created: latency-svc-txxtm
Sep 18 03:06:48.748: INFO: Got endpoints: latency-svc-txxtm [877.603528ms]
Sep 18 03:06:48.786: INFO: Created: latency-svc-h2h5b
Sep 18 03:06:48.821: INFO: Got endpoints: latency-svc-h2h5b [921.629561ms]
Sep 18 03:06:48.888: INFO: Created: latency-svc-dj22x
Sep 18 03:06:48.893: INFO: Got endpoints: latency-svc-dj22x [890.603424ms]
Sep 18 03:06:48.929: INFO: Created: latency-svc-rhcnf
Sep 18 03:06:48.947: INFO: Got endpoints: latency-svc-rhcnf [921.101051ms]
Sep 18 03:06:48.972: INFO: Created: latency-svc-p8mjq
Sep 18 03:06:49.043: INFO: Got endpoints: latency-svc-p8mjq [981.01188ms]
Sep 18 03:06:49.046: INFO: Created: latency-svc-7jg57
Sep 18 03:06:49.049: INFO: Got endpoints: latency-svc-7jg57 [950.146931ms]
Sep 18 03:06:49.080: INFO: Created: latency-svc-9htll
Sep 18 03:06:49.092: INFO: Got endpoints: latency-svc-9htll [933.219798ms]
Sep 18 03:06:49.128: INFO: Created: latency-svc-2h28b
Sep 18 03:06:49.193: INFO: Got endpoints: latency-svc-2h28b [998.143877ms]
Sep 18 03:06:49.217: INFO: Created: latency-svc-fjftt
Sep 18 03:06:49.230: INFO: Got endpoints: latency-svc-fjftt [999.483502ms]
Sep 18 03:06:49.253: INFO: Created: latency-svc-dmg6k
Sep 18 03:06:49.266: INFO: Got endpoints: latency-svc-dmg6k [961.704748ms]
Sep 18 03:06:49.289: INFO: Created: latency-svc-926kv
Sep 18 03:06:49.361: INFO: Got endpoints: latency-svc-926kv [1.003089412s]
Sep 18 03:06:49.364: INFO: Created: latency-svc-rcpq4
Sep 18 03:06:49.385: INFO: Got endpoints: latency-svc-rcpq4 [934.336131ms]
Sep 18 03:06:49.433: INFO: Created: latency-svc-ptfhb
Sep 18 03:06:49.447: INFO: Got endpoints: latency-svc-ptfhb [864.219734ms]
Sep 18 03:06:49.524: INFO: Created: latency-svc-q78q6
Sep 18 03:06:49.537: INFO: Got endpoints: latency-svc-q78q6 [908.820319ms]
Sep 18 03:06:49.578: INFO: Created: latency-svc-vgct8
Sep 18 03:06:49.592: INFO: Got endpoints: latency-svc-vgct8 [853.761802ms]
Sep 18 03:06:49.613: INFO: Created: latency-svc-6t59q
Sep 18 03:06:49.702: INFO: Got endpoints: latency-svc-6t59q [953.890164ms]
Sep 18 03:06:49.739: INFO: Created: latency-svc-6bnks
Sep 18 03:06:49.766: INFO: Got endpoints: latency-svc-6bnks [944.623458ms]
Sep 18 03:06:49.794: INFO: Created: latency-svc-fjtbp
Sep 18 03:06:49.870: INFO: Got endpoints: latency-svc-fjtbp [976.448904ms]
Sep 18 03:06:49.873: INFO: Created: latency-svc-jmbzr
Sep 18 03:06:49.880: INFO: Got endpoints: latency-svc-jmbzr [933.096519ms]
Sep 18 03:06:49.902: INFO: Created: latency-svc-wscsm
Sep 18 03:06:49.917: INFO: Got endpoints: latency-svc-wscsm [872.725316ms]
Sep 18 03:06:49.952: INFO: Created: latency-svc-6rhwh
Sep 18 03:06:49.965: INFO: Got endpoints: latency-svc-6rhwh [916.327493ms]
Sep 18 03:06:50.015: INFO: Created: latency-svc-bsvm8
Sep 18 03:06:50.020: INFO: Got endpoints: latency-svc-bsvm8 [927.727796ms]
Sep 18 03:06:50.041: INFO: Created: latency-svc-8kxs7
Sep 18 03:06:50.050: INFO: Got endpoints: latency-svc-8kxs7 [856.196728ms]
Sep 18 03:06:50.082: INFO: Created: latency-svc-zgxdf
Sep 18 03:06:50.098: INFO: Got endpoints: latency-svc-zgxdf [867.156454ms]
Sep 18 03:06:50.187: INFO: Created: latency-svc-hhjqm
Sep 18 03:06:50.199: INFO: Got endpoints: latency-svc-hhjqm [932.877898ms]
Sep 18 03:06:50.226: INFO: Created: latency-svc-pb995
Sep 18 03:06:50.244: INFO: Got endpoints: latency-svc-pb995 [883.315915ms]
Sep 18 03:06:50.269: INFO: Created: latency-svc-7kqnr
Sep 18 03:06:50.286: INFO: Got endpoints: latency-svc-7kqnr [900.977157ms]
Sep 18 03:06:50.355: INFO: Created: latency-svc-6g6bc
Sep 18 03:06:50.357: INFO: Got endpoints: latency-svc-6g6bc [909.629641ms]
Sep 18 03:06:50.381: INFO: Created: latency-svc-hjw8t
Sep 18 03:06:50.394: INFO: Got endpoints: latency-svc-hjw8t [857.344736ms]
Sep 18 03:06:50.418: INFO: Created: latency-svc-j55sl
Sep 18 03:06:50.440: INFO: Got endpoints: latency-svc-j55sl [847.898813ms]
Sep 18 03:06:50.511: INFO: Created: latency-svc-nvl6r
Sep 18 03:06:50.513: INFO: Got endpoints: latency-svc-nvl6r [811.025714ms]
Sep 18 03:06:50.550: INFO: Created: latency-svc-6wkzk
Sep 18 03:06:50.579: INFO: Got endpoints: latency-svc-6wkzk [813.151823ms]
Sep 18 03:06:50.610: INFO: Created: latency-svc-qbbfs
Sep 18 03:06:50.691: INFO: Got endpoints: latency-svc-qbbfs [821.188206ms]
Sep 18 03:06:50.693: INFO: Created: latency-svc-cgrvq
Sep 18 03:06:50.702: INFO: Got endpoints: latency-svc-cgrvq [821.010924ms]
Sep 18 03:06:50.735: INFO: Created: latency-svc-26m24
Sep 18 03:06:50.764: INFO: Got endpoints: latency-svc-26m24 [847.227857ms]
Sep 18 03:06:50.784: INFO: Created: latency-svc-rcrmt
Sep 18 03:06:50.851: INFO: Got endpoints: latency-svc-rcrmt [885.840448ms]
Sep 18 03:06:50.880: INFO: Created: latency-svc-rq79b
Sep 18 03:06:50.894: INFO: Got endpoints: latency-svc-rq79b [874.559827ms]
Sep 18 03:06:50.915: INFO: Created: latency-svc-8btml
Sep 18 03:06:50.924: INFO: Got endpoints: latency-svc-8btml [874.369098ms]
Sep 18 03:06:50.951: INFO: Created: latency-svc-gck2h
Sep 18 03:06:51.013: INFO: Got endpoints: latency-svc-gck2h [915.259474ms]
Sep 18 03:06:51.015: INFO: Created: latency-svc-pz6dn
Sep 18 03:06:51.020: INFO: Got endpoints: latency-svc-pz6dn [820.574204ms]
Sep 18 03:06:51.043: INFO: Created: latency-svc-7wxtq
Sep 18 03:06:51.057: INFO: Got endpoints: latency-svc-7wxtq [812.743286ms]
Sep 18 03:06:51.079: INFO: Created: latency-svc-md2zt
Sep 18 03:06:51.094: INFO: Got endpoints: latency-svc-md2zt [807.370232ms]
Sep 18 03:06:51.151: INFO: Created: latency-svc-lrchv
Sep 18 03:06:51.154: INFO: Got endpoints: latency-svc-lrchv [796.309867ms]
Sep 18 03:06:51.205: INFO: Created: latency-svc-sthdr
Sep 18 03:06:51.220: INFO: Got endpoints: latency-svc-sthdr [825.243249ms]
Sep 18 03:06:51.246: INFO: Created: latency-svc-tf2bj
Sep 18 03:06:51.289: INFO: Got endpoints: latency-svc-tf2bj [848.274821ms]
Sep 18 03:06:51.294: INFO: Created: latency-svc-htnmz
Sep 18 03:06:51.310: INFO: Got endpoints: latency-svc-htnmz [797.161118ms]
Sep 18 03:06:51.335: INFO: Created: latency-svc-8ctsb
Sep 18 03:06:51.347: INFO: Got endpoints: latency-svc-8ctsb [766.774938ms]
Sep 18 03:06:51.371: INFO: Created: latency-svc-fgbd6
Sep 18 03:06:51.383: INFO: Got endpoints: latency-svc-fgbd6 [691.631247ms]
Sep 18 03:06:51.439: INFO: Created: latency-svc-vc885
Sep 18 03:06:51.462: INFO: Got endpoints: latency-svc-vc885 [759.840205ms]
Sep 18 03:06:51.462: INFO: Created: latency-svc-qb9dl
Sep 18 03:06:51.486: INFO: Got endpoints: latency-svc-qb9dl [721.283497ms]
Sep 18 03:06:51.534: INFO: Created: latency-svc-dg294
Sep 18 03:06:51.589: INFO: Got endpoints: latency-svc-dg294 [737.022615ms]
Sep 18 03:06:51.635: INFO: Created: latency-svc-tgkxg
Sep 18 03:06:51.786: INFO: Got endpoints: latency-svc-tgkxg [891.680413ms]
Sep 18 03:06:51.788: INFO: Created: latency-svc-r8fwg
Sep 18 03:06:51.798: INFO: Got endpoints: latency-svc-r8fwg [873.462838ms]
Sep 18 03:06:51.833: INFO: Created: latency-svc-njp2d
Sep 18 03:06:51.860: INFO: Got endpoints: latency-svc-njp2d [846.367019ms]
Sep 18 03:06:51.882: INFO: Created: latency-svc-44nsk
Sep 18 03:06:51.942: INFO: Got endpoints: latency-svc-44nsk [921.141291ms]
Sep 18 03:06:51.973: INFO: Created: latency-svc-xm754
Sep 18 03:06:51.985: INFO: Got endpoints: latency-svc-xm754 [927.500223ms]
Sep 18 03:06:52.014: INFO: Created: latency-svc-kd99n
Sep 18 03:06:52.026: INFO: Got endpoints: latency-svc-kd99n [932.711996ms]
Sep 18 03:06:52.074: INFO: Created: latency-svc-jftw5
Sep 18 03:06:52.086: INFO: Got endpoints: latency-svc-jftw5 [932.05658ms]
Sep 18 03:06:52.122: INFO: Created: latency-svc-rkmdc
Sep 18 03:06:52.135: INFO: Got endpoints: latency-svc-rkmdc [915.075132ms]
Sep 18 03:06:52.158: INFO: Created: latency-svc-7x99m
Sep 18 03:06:52.229: INFO: Got endpoints: latency-svc-7x99m [939.964797ms]
Sep 18 03:06:52.238: INFO: Created: latency-svc-wfpkl
Sep 18 03:06:52.259: INFO: Got endpoints: latency-svc-wfpkl [948.24936ms]
Sep 18 03:06:52.290: INFO: Created: latency-svc-h6ms6
Sep 18 03:06:52.298: INFO: Got endpoints: latency-svc-h6ms6 [950.830785ms]
Sep 18 03:06:52.326: INFO: Created: latency-svc-5rq54
Sep 18 03:06:52.385: INFO: Got endpoints: latency-svc-5rq54 [1.001845881s]
Sep 18 03:06:52.387: INFO: Created: latency-svc-xg5wl
Sep 18 03:06:52.400: INFO: Got endpoints: latency-svc-xg5wl [938.090306ms]
Sep 18 03:06:52.427: INFO: Created: latency-svc-zrksb
Sep 18 03:06:52.463: INFO: Got endpoints: latency-svc-zrksb [976.900657ms]
Sep 18 03:06:52.552: INFO: Created: latency-svc-x26w7
Sep 18 03:06:52.603: INFO: Got endpoints: latency-svc-x26w7 [1.013663333s]
Sep 18 03:06:52.607: INFO: Created: latency-svc-h6nqx
Sep 18 03:06:52.623: INFO: Got endpoints: latency-svc-h6nqx [836.110944ms]
Sep 18 03:06:52.650: INFO: Created: latency-svc-8ktw6
Sep 18 03:06:52.720: INFO: Got endpoints: latency-svc-8ktw6 [921.390768ms]
Sep 18 03:06:52.757: INFO: Created: latency-svc-wq2bl
Sep 18 03:06:52.767: INFO: Got endpoints: latency-svc-wq2bl [907.192787ms]
Sep 18 03:06:52.788: INFO: Created: latency-svc-cfw8v
Sep 18 03:06:52.804: INFO: Got endpoints: latency-svc-cfw8v [861.668771ms]
Sep 18 03:06:52.870: INFO: Created: latency-svc-vqbmz
Sep 18 03:06:52.876: INFO: Got endpoints: latency-svc-vqbmz [890.20576ms]
Sep 18 03:06:52.901: INFO: Created: latency-svc-c9ltm
Sep 18 03:06:52.918: INFO: Got endpoints: latency-svc-c9ltm [891.457852ms]
Sep 18 03:06:52.943: INFO: Created: latency-svc-vwwpf
Sep 18 03:06:52.955: INFO: Got endpoints: latency-svc-vwwpf [868.79687ms]
Sep 18 03:06:53.002: INFO: Created: latency-svc-f5ggp
Sep 18 03:06:53.004: INFO: Got endpoints: latency-svc-f5ggp [868.914741ms]
Sep 18 03:06:53.028: INFO: Created: latency-svc-sd22h
Sep 18 03:06:53.045: INFO: Got endpoints: latency-svc-sd22h [816.350081ms]
Sep 18 03:06:53.070: INFO: Created: latency-svc-j67j2
Sep 18 03:06:53.087: INFO: Got endpoints: latency-svc-j67j2 [827.876076ms]
Sep 18 03:06:53.151: INFO: Created: latency-svc-hq8x5
Sep 18 03:06:53.153: INFO: Got endpoints: latency-svc-hq8x5 [855.359153ms]
Sep 18 03:06:53.190: INFO: Created: latency-svc-5fzc5
Sep 18 03:06:53.202: INFO: Got endpoints: latency-svc-5fzc5 [816.278751ms]
Sep 18 03:06:53.231: INFO: Created: latency-svc-tz5z6
Sep 18 03:06:53.244: INFO: Got endpoints: latency-svc-tz5z6 [843.788784ms]
Sep 18 03:06:53.295: INFO: Created: latency-svc-f9gbn
Sep 18 03:06:53.298: INFO: Got endpoints: latency-svc-f9gbn [834.945954ms]
Sep 18 03:06:53.328: INFO: Created: latency-svc-sbf7f
Sep 18 03:06:53.340: INFO: Got endpoints: latency-svc-sbf7f [737.165352ms]
Sep 18 03:06:53.370: INFO: Created: latency-svc-dk7j9
Sep 18 03:06:53.389: INFO: Got endpoints: latency-svc-dk7j9 [765.426128ms]
Sep 18 03:06:53.457: INFO: Created: latency-svc-96t9h
Sep 18 03:06:53.483: INFO: Created: latency-svc-lnbhx
Sep 18 03:06:53.482: INFO: Got endpoints: latency-svc-96t9h [762.540355ms]
Sep 18 03:06:53.496: INFO: Got endpoints: latency-svc-lnbhx [728.538233ms]
Sep 18 03:06:53.497: INFO: Latencies: [81.400554ms 117.919951ms 177.141659ms 213.495067ms 242.623294ms 305.419481ms 345.350925ms 397.535242ms 460.742955ms 503.03377ms 538.460907ms 594.601316ms 640.879738ms 678.124555ms 691.631247ms 705.506109ms 705.686297ms 709.472255ms 710.140672ms 716.229374ms 717.511516ms 719.5443ms 720.549592ms 721.283497ms 723.254697ms 723.31616ms 728.538233ms 736.233515ms 737.022615ms 737.165352ms 740.287192ms 741.143738ms 741.853242ms 744.868889ms 747.7932ms 750.653424ms 751.459251ms 752.45649ms 752.660059ms 753.256079ms 754.014392ms 754.018552ms 755.121577ms 755.829826ms 756.748388ms 757.525153ms 758.701119ms 759.241189ms 759.840205ms 762.540355ms 762.727578ms 762.972487ms 765.119562ms 765.426128ms 765.606246ms 765.842212ms 766.265652ms 766.499602ms 766.774938ms 768.550345ms 770.317621ms 771.084579ms 771.817605ms 777.01415ms 778.386352ms 779.184402ms 783.043798ms 783.588851ms 785.53386ms 786.848277ms 787.195501ms 789.40293ms 789.699862ms 790.555697ms 796.064875ms 796.309867ms 796.50083ms 796.908641ms 797.161118ms 799.125572ms 799.130401ms 801.957472ms 802.149662ms 803.531579ms 807.17025ms 807.370232ms 808.842327ms 810.094177ms 811.025714ms 811.122393ms 811.864109ms 812.664068ms 812.743286ms 813.103357ms 813.151823ms 816.278751ms 816.350081ms 818.664968ms 820.249244ms 820.574204ms 821.010924ms 821.188206ms 823.889446ms 824.145784ms 825.118978ms 825.243249ms 825.274917ms 827.876076ms 832.631834ms 832.813015ms 833.536814ms 834.945954ms 836.110944ms 838.680771ms 842.686832ms 842.909271ms 843.788784ms 846.367019ms 847.227857ms 847.898813ms 848.274821ms 853.260766ms 853.323239ms 853.761802ms 854.249156ms 855.359153ms 856.196728ms 857.344736ms 858.556418ms 861.668771ms 863.534193ms 864.219734ms 865.049277ms 867.156454ms 868.79687ms 868.914741ms 869.784383ms 872.725316ms 873.462838ms 873.608522ms 873.79586ms 874.369098ms 874.559827ms 877.310542ms 877.603528ms 880.785894ms 881.2872ms 883.315915ms 885.840448ms 886.391101ms 888.256355ms 888.496309ms 890.20576ms 890.603424ms 891.457852ms 891.680413ms 893.147286ms 893.582693ms 894.690969ms 900.977157ms 903.008326ms 907.192787ms 908.820319ms 909.517325ms 909.629641ms 915.075132ms 915.259474ms 916.327493ms 921.101051ms 921.141291ms 921.390768ms 921.629561ms 927.500223ms 927.727796ms 932.05658ms 932.711996ms 932.877898ms 933.096519ms 933.219798ms 934.336131ms 938.090306ms 939.964797ms 941.022429ms 942.256688ms 944.623458ms 947.973226ms 948.24936ms 950.146931ms 950.830785ms 953.890164ms 955.782106ms 961.704748ms 976.448904ms 976.900657ms 981.01188ms 998.143877ms 999.483502ms 1.001845881s 1.003089412s 1.013663333s]
Sep 18 03:06:53.500: INFO: 50 %ile: 821.010924ms
Sep 18 03:06:53.500: INFO: 90 %ile: 938.090306ms
Sep 18 03:06:53.500: INFO: 99 %ile: 1.003089412s
Sep 18 03:06:53.500: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:06:53.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7697" for this suite.
Sep 18 03:07:17.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:07:17.671: INFO: namespace svc-latency-7697 deletion completed in 24.16226965s

• [SLOW TEST:39.614 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
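The "50 %ile / 90 %ile / 99 %ile" summary lines in the latency test above are derived from the sorted Latencies slice. A minimal sketch of a nearest-rank percentile over such a list, assuming index-based selection (the exact rounding the e2e framework uses may differ; check test/e2e/framework):

```python
import math

def percentile(sorted_samples, p):
    """Return the p-th percentile of an already-sorted list (nearest rank)."""
    if not sorted_samples:
        raise ValueError("no samples")
    # Nearest-rank: take the ceil(p/100 * N)-th sample, 1-indexed.
    idx = max(1, math.ceil(p / 100 * len(sorted_samples)))
    return sorted_samples[idx - 1]

# Toy data: 200 latencies, matching the run's "Total sample count: 200".
samples = sorted(range(1, 201))
print(percentile(samples, 50))  # -> 100
print(percentile(samples, 90))  # -> 180
print(percentile(samples, 99))  # -> 198
```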
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:07:17.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:07:21.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3417" for this suite.
Sep 18 03:07:59.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:07:59.971: INFO: namespace kubelet-test-3417 deletion completed in 38.167166791s

• [SLOW TEST:42.299 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:07:59.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-b8b18cae-0803-4ccd-b9af-0c563c23bd34
STEP: Creating a pod to test consume secrets
Sep 18 03:08:00.081: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f1f9391f-6d87-435b-984b-89f2ab8cef9a" in namespace "projected-9428" to be "success or failure"
Sep 18 03:08:00.115: INFO: Pod "pod-projected-secrets-f1f9391f-6d87-435b-984b-89f2ab8cef9a": Phase="Pending", Reason="", readiness=false. Elapsed: 33.220761ms
Sep 18 03:08:02.123: INFO: Pod "pod-projected-secrets-f1f9391f-6d87-435b-984b-89f2ab8cef9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041318672s
Sep 18 03:08:04.131: INFO: Pod "pod-projected-secrets-f1f9391f-6d87-435b-984b-89f2ab8cef9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049278136s
STEP: Saw pod success
Sep 18 03:08:04.131: INFO: Pod "pod-projected-secrets-f1f9391f-6d87-435b-984b-89f2ab8cef9a" satisfied condition "success or failure"
Sep 18 03:08:04.135: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-f1f9391f-6d87-435b-984b-89f2ab8cef9a container projected-secret-volume-test: 
STEP: delete the pod
Sep 18 03:08:04.163: INFO: Waiting for pod pod-projected-secrets-f1f9391f-6d87-435b-984b-89f2ab8cef9a to disappear
Sep 18 03:08:04.181: INFO: Pod pod-projected-secrets-f1f9391f-6d87-435b-984b-89f2ab8cef9a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:08:04.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9428" for this suite.
Sep 18 03:08:10.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:08:10.341: INFO: namespace projected-9428 deletion completed in 6.15134158s

• [SLOW TEST:10.368 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
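The projected-secret test above mounts a secret as a volume with defaultMode and fsGroup set while running as a non-root user. A sketch of the pod shape involved, built as a manifest dict; all names, UIDs, and the image are illustrative, not read from the log:

```python
def projected_secret_pod(secret_name, default_mode=0o400, fs_group=1001, run_as_user=1000):
    """Build a pod manifest mounting a projected secret volume read-only (illustrative)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-secrets-example"},
        "spec": {
            "securityContext": {
                "runAsUser": run_as_user,  # non-root, so fsGroup controls group ownership
                "fsGroup": fs_group,
            },
            "containers": [{
                "name": "projected-secret-volume-test",
                "image": "busybox",  # illustrative image
                "command": ["cat", "/etc/projected-secret-volume/data-1"],
                "volumeMounts": [{
                    "name": "projected-secret-volume",
                    "mountPath": "/etc/projected-secret-volume",
                    "readOnly": True,
                }],
            }],
            "volumes": [{
                "name": "projected-secret-volume",
                "projected": {
                    "defaultMode": default_mode,  # file mode applied to projected keys
                    "sources": [{"secret": {"name": secret_name}}],
                },
            }],
        },
    }

pod = projected_secret_pod("projected-secret-test-example")
print(oct(pod["spec"]["volumes"][0]["projected"]["defaultMode"]))  # -> 0o400
```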
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:08:10.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep 18 03:08:10.494: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:08:10.502: INFO: Number of nodes with available pods: 0
Sep 18 03:08:10.502: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:08:11.513: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:08:11.520: INFO: Number of nodes with available pods: 0
Sep 18 03:08:11.521: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:08:12.514: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:08:12.520: INFO: Number of nodes with available pods: 0
Sep 18 03:08:12.520: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:08:13.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:08:13.522: INFO: Number of nodes with available pods: 0
Sep 18 03:08:13.522: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:08:14.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:08:14.523: INFO: Number of nodes with available pods: 1
Sep 18 03:08:14.523: INFO: Node iruya-worker2 is running more than one daemon pod
Sep 18 03:08:15.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:08:15.523: INFO: Number of nodes with available pods: 2
Sep 18 03:08:15.523: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Sep 18 03:08:15.592: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:08:15.608: INFO: Number of nodes with available pods: 1
Sep 18 03:08:15.608: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:08:16.619: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:08:16.624: INFO: Number of nodes with available pods: 1
Sep 18 03:08:16.624: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:08:17.620: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:08:17.626: INFO: Number of nodes with available pods: 1
Sep 18 03:08:17.626: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:08:18.621: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:08:18.627: INFO: Number of nodes with available pods: 2
Sep 18 03:08:18.627: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1846, will wait for the garbage collector to delete the pods
Sep 18 03:08:18.698: INFO: Deleting DaemonSet.extensions daemon-set took: 7.589978ms
Sep 18 03:08:18.999: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.9757ms
Sep 18 03:08:24.604: INFO: Number of nodes with available pods: 0
Sep 18 03:08:24.605: INFO: Number of running nodes: 0, number of available pods: 0
Sep 18 03:08:24.609: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1846/daemonsets","resourceVersion":"793706"},"items":null}

Sep 18 03:08:24.613: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1846/pods","resourceVersion":"793706"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:08:24.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1846" for this suite.
Sep 18 03:08:30.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:08:30.827: INFO: namespace daemonsets-1846 deletion completed in 6.182349935s

• [SLOW TEST:20.484 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
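The repeated "DaemonSet pods can't tolerate node iruya-control-plane with taints" lines above come from a toleration check: a node is skipped when it carries a NoSchedule taint none of the pod's tolerations match. A reduced model of that check (no tolerationSeconds, only the basic Exists/Equal operators):

```python
def tolerates(taint, tolerations):
    """True if any toleration matches the taint (simplified Kubernetes semantics)."""
    for t in tolerations:
        if t.get("effect") not in (None, "", taint["effect"]):
            continue  # toleration is scoped to a different effect
        if t.get("operator", "Equal") == "Exists":
            if t.get("key") in (None, "", taint["key"]):
                return True  # empty key + Exists tolerates everything
        elif t.get("key") == taint["key"] and t.get("value", "") == taint.get("value", ""):
            return True
    return False

def schedulable_nodes(nodes, tolerations):
    """Nodes whose NoSchedule taints are all tolerated -- the rest are skipped."""
    return [name for name, taints in nodes.items()
            if all(tolerates(t, tolerations)
                   for t in taints if t["effect"] == "NoSchedule")]

master_taint = {"key": "node-role.kubernetes.io/master", "value": "", "effect": "NoSchedule"}
nodes = {"iruya-control-plane": [master_taint], "iruya-worker": [], "iruya-worker2": []}
print(schedulable_nodes(nodes, tolerations=[]))  # -> ['iruya-worker', 'iruya-worker2']
```

With no tolerations, only the two untainted workers qualify, matching the "Number of running nodes: 2" lines in the log.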
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:08:30.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3098.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3098.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 18 03:08:37.145: INFO: DNS probes using dns-test-66e2e216-27ee-4f22-a363-e8fe1db2d66d succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3098.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3098.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 18 03:08:43.325: INFO: File wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local from pod  dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 18 03:08:43.329: INFO: File jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local from pod  dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 18 03:08:43.329: INFO: Lookups using dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 failed for: [wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local]

Sep 18 03:08:48.336: INFO: File wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local from pod  dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 18 03:08:48.341: INFO: File jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local from pod  dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 18 03:08:48.341: INFO: Lookups using dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 failed for: [wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local]

Sep 18 03:08:53.335: INFO: File wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local from pod  dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 18 03:08:53.340: INFO: File jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local from pod  dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 18 03:08:53.340: INFO: Lookups using dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 failed for: [wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local]

Sep 18 03:08:58.337: INFO: File wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local from pod  dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 18 03:08:58.342: INFO: File jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local from pod  dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 18 03:08:58.343: INFO: Lookups using dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 failed for: [wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local]

Sep 18 03:09:03.338: INFO: File wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local from pod  dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 18 03:09:03.343: INFO: File jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local from pod  dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 18 03:09:03.343: INFO: Lookups using dns-3098/dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 failed for: [wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local]

Sep 18 03:09:08.341: INFO: DNS probes using dns-test-e303bd58-915a-4f70-812a-b779ac1b1005 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3098.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3098.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3098.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3098.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 18 03:09:14.970: INFO: DNS probes using dns-test-f39467dc-fc2f-43ff-ab3d-4dc7f4f6f09c succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:09:15.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3098" for this suite.
Sep 18 03:09:21.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:09:21.229: INFO: namespace dns-3098 deletion completed in 6.161271496s

• [SLOW TEST:50.400 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
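In the DNS test above, lookups keep returning the stale 'foo.example.com.' answer for roughly 25 seconds after the ExternalName changes, until caches expire and 'bar.example.com.' appears. A minimal model of that retry loop, with a fake resolver standing in for the dig probes:

```python
def wait_for_record(resolve, expected, attempts=10):
    """Call resolve() until it returns expected; return how many tries it took."""
    got = None
    for attempt in range(1, attempts + 1):
        got = resolve()
        if got == expected:
            return attempt
    raise TimeoutError(f"still seeing {got!r} after {attempts} attempts")

# Fake resolver: serves the stale answer for the first 4 calls (standing in
# for DNS TTL/cache delay), then the updated CNAME target.
answers = iter(["foo.example.com."] * 4 + ["bar.example.com."] * 100)
tries = wait_for_record(lambda: next(answers), "bar.example.com.")
print(tries)  # -> 5
```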
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:09:21.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Sep 18 03:09:21.321: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-watch-closed,UID:0eedab87-c936-4163-8f06-ea8e5232cd2e,ResourceVersion:793962,Generation:0,CreationTimestamp:2020-09-18 03:09:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 18 03:09:21.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-watch-closed,UID:0eedab87-c936-4163-8f06-ea8e5232cd2e,ResourceVersion:793963,Generation:0,CreationTimestamp:2020-09-18 03:09:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Sep 18 03:09:21.340: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-watch-closed,UID:0eedab87-c936-4163-8f06-ea8e5232cd2e,ResourceVersion:793964,Generation:0,CreationTimestamp:2020-09-18 03:09:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 18 03:09:21.341: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-watch-closed,UID:0eedab87-c936-4163-8f06-ea8e5232cd2e,ResourceVersion:793965,Generation:0,CreationTimestamp:2020-09-18 03:09:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:09:21.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4745" for this suite.
Sep 18 03:09:27.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:09:27.509: INFO: namespace watch-4745 deletion completed in 6.158573528s

• [SLOW TEST:6.278 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
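The watch test above closes its watch after two notifications, then opens a new one "from the last resource version observed" and sees only the MODIFIED and DELETED events that happened while it was closed. A toy model of that resume semantics, where the server replays events with a resourceVersion strictly greater than the one supplied (real apiservers keep only a bounded history; this model keeps everything):

```python
class EventLog:
    """Ordered event history keyed by a monotonically increasing resourceVersion."""
    def __init__(self):
        self.rv = 0
        self.events = []  # list of (resourceVersion, type, name)

    def record(self, etype, name):
        self.rv += 1
        self.events.append((self.rv, etype, name))
        return self.rv

    def watch_from(self, resource_version):
        """Replay events strictly after the given resourceVersion."""
        return [e for e in self.events if e[0] > resource_version]

log = EventLog()
log.record("ADDED", "e2e-watch-test-watch-closed")
last_seen = log.record("MODIFIED", "e2e-watch-test-watch-closed")  # watch closes here
log.record("MODIFIED", "e2e-watch-test-watch-closed")  # happens while closed
log.record("DELETED", "e2e-watch-test-watch-closed")

replay = log.watch_from(last_seen)
print([e[1] for e in replay])  # -> ['MODIFIED', 'DELETED']
```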
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:09:27.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:09:27.658: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Sep 18 03:09:28.740: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:09:28.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3545" for this suite.
Sep 18 03:09:36.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:09:36.991: INFO: namespace replication-controller-3545 deletion completed in 8.192849084s

• [SLOW TEST:9.480 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
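The ReplicationController test above creates a quota allowing two pods, asks the RC for more, observes a ReplicaFailure condition, then scales down and sees the condition clear. A toy reconcile loop modelling that behaviour (the condition fields mirror the real API shape, but the loop itself is a simplification):

```python
def reconcile(desired, quota, existing=0):
    """Try to create pods up to `desired` under `quota`; return (pods, conditions)."""
    pods = existing
    conditions = []
    while pods < desired:
        if pods >= quota:
            # Creation rejected by the quota admission check: surface a condition
            # instead of retrying forever.
            conditions.append({"type": "ReplicaFailure", "status": "True",
                               "reason": "FailedCreate"})
            break
        pods += 1
    return pods, conditions

pods, conds = reconcile(desired=3, quota=2)
print(pods, [c["type"] for c in conds])  # -> 2 ['ReplicaFailure']

# Scaling down to fit the quota leaves no failure condition.
pods, conds = reconcile(desired=2, quota=2, existing=pods)
print(pods, conds)  # -> 2 []
```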
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:09:36.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:10:03.150: INFO: Container started at 2020-09-18 03:09:39 +0000 UTC, pod became ready at 2020-09-18 03:10:01 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:10:03.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9773" for this suite.
Sep 18 03:10:25.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:10:25.309: INFO: namespace container-probe-9773 deletion completed in 22.15051747s

• [SLOW TEST:48.315 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
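Note: the ~22s gap the test observes (container started 03:09:39, Ready at 03:10:01) is what a readiness probe with a long initialDelaySeconds produces. A minimal sketch of the kind of pod spec this test exercises — the name, image, and delay values are illustrative assumptions, not taken from this run:

```yaml
# Hypothetical sketch: a readiness probe with an initial delay, so the pod
# must stay NotReady for at least that long after the container starts,
# while the container itself is never restarted.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver            # assumed name
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0  # assumed image
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20     # pod must not report Ready before this delay
      periodSeconds: 5
    # no livenessProbe, so nothing can trigger a restart; the test asserts
    # restartCount stays 0 for the duration of the observation window
```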
SSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:10:25.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Sep 18 03:10:41.702: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8756 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:10:41.702: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:10:41.799318       7 log.go:172] (0x9455110) (0x94551f0) Create stream
I0918 03:10:41.799480       7 log.go:172] (0x9455110) (0x94551f0) Stream added, broadcasting: 1
I0918 03:10:41.802742       7 log.go:172] (0x9455110) Reply frame received for 1
I0918 03:10:41.802905       7 log.go:172] (0x9455110) (0x94d40e0) Create stream
I0918 03:10:41.802978       7 log.go:172] (0x9455110) (0x94d40e0) Stream added, broadcasting: 3
I0918 03:10:41.804031       7 log.go:172] (0x9455110) Reply frame received for 3
I0918 03:10:41.804185       7 log.go:172] (0x9455110) (0x94d49a0) Create stream
I0918 03:10:41.804250       7 log.go:172] (0x9455110) (0x94d49a0) Stream added, broadcasting: 5
I0918 03:10:41.805452       7 log.go:172] (0x9455110) Reply frame received for 5
I0918 03:10:41.882315       7 log.go:172] (0x9455110) Data frame received for 3
I0918 03:10:41.882736       7 log.go:172] (0x94d40e0) (3) Data frame handling
I0918 03:10:41.882922       7 log.go:172] (0x94d40e0) (3) Data frame sent
I0918 03:10:41.883059       7 log.go:172] (0x9455110) Data frame received for 3
I0918 03:10:41.883204       7 log.go:172] (0x9455110) Data frame received for 5
I0918 03:10:41.883440       7 log.go:172] (0x94d49a0) (5) Data frame handling
I0918 03:10:41.883586       7 log.go:172] (0x94d40e0) (3) Data frame handling
I0918 03:10:41.883847       7 log.go:172] (0x9455110) Data frame received for 1
I0918 03:10:41.883975       7 log.go:172] (0x94551f0) (1) Data frame handling
I0918 03:10:41.884097       7 log.go:172] (0x94551f0) (1) Data frame sent
I0918 03:10:41.884276       7 log.go:172] (0x9455110) (0x94551f0) Stream removed, broadcasting: 1
I0918 03:10:41.884413       7 log.go:172] (0x9455110) Go away received
I0918 03:10:41.884843       7 log.go:172] (0x9455110) (0x94551f0) Stream removed, broadcasting: 1
I0918 03:10:41.885021       7 log.go:172] (0x9455110) (0x94d40e0) Stream removed, broadcasting: 3
I0918 03:10:41.885149       7 log.go:172] (0x9455110) (0x94d49a0) Stream removed, broadcasting: 5
Sep 18 03:10:41.885: INFO: Exec stderr: ""
Sep 18 03:10:41.885: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8756 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:10:41.885: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:10:41.978995       7 log.go:172] (0x95b0e70) (0x95b0f50) Create stream
I0918 03:10:41.979144       7 log.go:172] (0x95b0e70) (0x95b0f50) Stream added, broadcasting: 1
I0918 03:10:41.985221       7 log.go:172] (0x95b0e70) Reply frame received for 1
I0918 03:10:41.985455       7 log.go:172] (0x95b0e70) (0x94552d0) Create stream
I0918 03:10:41.985576       7 log.go:172] (0x95b0e70) (0x94552d0) Stream added, broadcasting: 3
I0918 03:10:41.989089       7 log.go:172] (0x95b0e70) Reply frame received for 3
I0918 03:10:41.989292       7 log.go:172] (0x95b0e70) (0x95b10a0) Create stream
I0918 03:10:41.989443       7 log.go:172] (0x95b0e70) (0x95b10a0) Stream added, broadcasting: 5
I0918 03:10:41.991096       7 log.go:172] (0x95b0e70) Reply frame received for 5
I0918 03:10:42.048580       7 log.go:172] (0x95b0e70) Data frame received for 5
I0918 03:10:42.048733       7 log.go:172] (0x95b10a0) (5) Data frame handling
I0918 03:10:42.048839       7 log.go:172] (0x95b0e70) Data frame received for 3
I0918 03:10:42.048955       7 log.go:172] (0x94552d0) (3) Data frame handling
I0918 03:10:42.049085       7 log.go:172] (0x94552d0) (3) Data frame sent
I0918 03:10:42.049199       7 log.go:172] (0x95b0e70) Data frame received for 3
I0918 03:10:42.049319       7 log.go:172] (0x94552d0) (3) Data frame handling
I0918 03:10:42.049986       7 log.go:172] (0x95b0e70) Data frame received for 1
I0918 03:10:42.050109       7 log.go:172] (0x95b0f50) (1) Data frame handling
I0918 03:10:42.050215       7 log.go:172] (0x95b0f50) (1) Data frame sent
I0918 03:10:42.050355       7 log.go:172] (0x95b0e70) (0x95b0f50) Stream removed, broadcasting: 1
I0918 03:10:42.050536       7 log.go:172] (0x95b0e70) Go away received
I0918 03:10:42.050986       7 log.go:172] (0x95b0e70) (0x95b0f50) Stream removed, broadcasting: 1
I0918 03:10:42.051141       7 log.go:172] (0x95b0e70) (0x94552d0) Stream removed, broadcasting: 3
I0918 03:10:42.051236       7 log.go:172] (0x95b0e70) (0x95b10a0) Stream removed, broadcasting: 5
Sep 18 03:10:42.051: INFO: Exec stderr: ""
Sep 18 03:10:42.051: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8756 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:10:42.051: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:10:42.147426       7 log.go:172] (0x77ef1f0) (0x77ef2d0) Create stream
I0918 03:10:42.147573       7 log.go:172] (0x77ef1f0) (0x77ef2d0) Stream added, broadcasting: 1
I0918 03:10:42.151573       7 log.go:172] (0x77ef1f0) Reply frame received for 1
I0918 03:10:42.151766       7 log.go:172] (0x77ef1f0) (0x94553b0) Create stream
I0918 03:10:42.151883       7 log.go:172] (0x77ef1f0) (0x94553b0) Stream added, broadcasting: 3
I0918 03:10:42.153538       7 log.go:172] (0x77ef1f0) Reply frame received for 3
I0918 03:10:42.153777       7 log.go:172] (0x77ef1f0) (0x77ef3b0) Create stream
I0918 03:10:42.153902       7 log.go:172] (0x77ef1f0) (0x77ef3b0) Stream added, broadcasting: 5
I0918 03:10:42.155348       7 log.go:172] (0x77ef1f0) Reply frame received for 5
I0918 03:10:42.222263       7 log.go:172] (0x77ef1f0) Data frame received for 3
I0918 03:10:42.222447       7 log.go:172] (0x94553b0) (3) Data frame handling
I0918 03:10:42.222546       7 log.go:172] (0x94553b0) (3) Data frame sent
I0918 03:10:42.222636       7 log.go:172] (0x77ef1f0) Data frame received for 3
I0918 03:10:42.222711       7 log.go:172] (0x94553b0) (3) Data frame handling
I0918 03:10:42.222850       7 log.go:172] (0x77ef1f0) Data frame received for 5
I0918 03:10:42.223057       7 log.go:172] (0x77ef3b0) (5) Data frame handling
I0918 03:10:42.223838       7 log.go:172] (0x77ef1f0) Data frame received for 1
I0918 03:10:42.224118       7 log.go:172] (0x77ef2d0) (1) Data frame handling
I0918 03:10:42.224419       7 log.go:172] (0x77ef2d0) (1) Data frame sent
I0918 03:10:42.224645       7 log.go:172] (0x77ef1f0) (0x77ef2d0) Stream removed, broadcasting: 1
I0918 03:10:42.224851       7 log.go:172] (0x77ef1f0) Go away received
I0918 03:10:42.225276       7 log.go:172] (0x77ef1f0) (0x77ef2d0) Stream removed, broadcasting: 1
I0918 03:10:42.225405       7 log.go:172] (0x77ef1f0) (0x94553b0) Stream removed, broadcasting: 3
I0918 03:10:42.225535       7 log.go:172] (0x77ef1f0) (0x77ef3b0) Stream removed, broadcasting: 5
Sep 18 03:10:42.225: INFO: Exec stderr: ""
Sep 18 03:10:42.225: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8756 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:10:42.225: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:10:42.345962       7 log.go:172] (0x95b1810) (0x95b18f0) Create stream
I0918 03:10:42.346171       7 log.go:172] (0x95b1810) (0x95b18f0) Stream added, broadcasting: 1
I0918 03:10:42.351278       7 log.go:172] (0x95b1810) Reply frame received for 1
I0918 03:10:42.351481       7 log.go:172] (0x95b1810) (0x95b19d0) Create stream
I0918 03:10:42.351614       7 log.go:172] (0x95b1810) (0x95b19d0) Stream added, broadcasting: 3
I0918 03:10:42.353615       7 log.go:172] (0x95b1810) Reply frame received for 3
I0918 03:10:42.353877       7 log.go:172] (0x95b1810) (0x94d4bd0) Create stream
I0918 03:10:42.354008       7 log.go:172] (0x95b1810) (0x94d4bd0) Stream added, broadcasting: 5
I0918 03:10:42.355831       7 log.go:172] (0x95b1810) Reply frame received for 5
I0918 03:10:42.418590       7 log.go:172] (0x95b1810) Data frame received for 5
I0918 03:10:42.418742       7 log.go:172] (0x94d4bd0) (5) Data frame handling
I0918 03:10:42.418847       7 log.go:172] (0x95b1810) Data frame received for 3
I0918 03:10:42.418970       7 log.go:172] (0x95b19d0) (3) Data frame handling
I0918 03:10:42.419083       7 log.go:172] (0x95b19d0) (3) Data frame sent
I0918 03:10:42.419170       7 log.go:172] (0x95b1810) Data frame received for 3
I0918 03:10:42.419248       7 log.go:172] (0x95b19d0) (3) Data frame handling
I0918 03:10:42.419938       7 log.go:172] (0x95b1810) Data frame received for 1
I0918 03:10:42.420081       7 log.go:172] (0x95b18f0) (1) Data frame handling
I0918 03:10:42.420276       7 log.go:172] (0x95b18f0) (1) Data frame sent
I0918 03:10:42.420414       7 log.go:172] (0x95b1810) (0x95b18f0) Stream removed, broadcasting: 1
I0918 03:10:42.420573       7 log.go:172] (0x95b1810) Go away received
I0918 03:10:42.420980       7 log.go:172] (0x95b1810) (0x95b18f0) Stream removed, broadcasting: 1
I0918 03:10:42.421126       7 log.go:172] (0x95b1810) (0x95b19d0) Stream removed, broadcasting: 3
I0918 03:10:42.421247       7 log.go:172] (0x95b1810) (0x94d4bd0) Stream removed, broadcasting: 5
Sep 18 03:10:42.421: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Sep 18 03:10:42.421: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8756 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:10:42.421: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:10:42.521283       7 log.go:172] (0x94d5730) (0x94d57a0) Create stream
I0918 03:10:42.521401       7 log.go:172] (0x94d5730) (0x94d57a0) Stream added, broadcasting: 1
I0918 03:10:42.525706       7 log.go:172] (0x94d5730) Reply frame received for 1
I0918 03:10:42.525996       7 log.go:172] (0x94d5730) (0x77ef500) Create stream
I0918 03:10:42.526163       7 log.go:172] (0x94d5730) (0x77ef500) Stream added, broadcasting: 3
I0918 03:10:42.528452       7 log.go:172] (0x94d5730) Reply frame received for 3
I0918 03:10:42.528645       7 log.go:172] (0x94d5730) (0x77ef5e0) Create stream
I0918 03:10:42.528738       7 log.go:172] (0x94d5730) (0x77ef5e0) Stream added, broadcasting: 5
I0918 03:10:42.530168       7 log.go:172] (0x94d5730) Reply frame received for 5
I0918 03:10:42.579321       7 log.go:172] (0x94d5730) Data frame received for 3
I0918 03:10:42.579531       7 log.go:172] (0x77ef500) (3) Data frame handling
I0918 03:10:42.579658       7 log.go:172] (0x94d5730) Data frame received for 5
I0918 03:10:42.579872       7 log.go:172] (0x77ef5e0) (5) Data frame handling
I0918 03:10:42.580101       7 log.go:172] (0x77ef500) (3) Data frame sent
I0918 03:10:42.580285       7 log.go:172] (0x94d5730) Data frame received for 3
I0918 03:10:42.580361       7 log.go:172] (0x77ef500) (3) Data frame handling
I0918 03:10:42.580604       7 log.go:172] (0x94d5730) Data frame received for 1
I0918 03:10:42.580692       7 log.go:172] (0x94d57a0) (1) Data frame handling
I0918 03:10:42.580797       7 log.go:172] (0x94d57a0) (1) Data frame sent
I0918 03:10:42.580887       7 log.go:172] (0x94d5730) (0x94d57a0) Stream removed, broadcasting: 1
I0918 03:10:42.580981       7 log.go:172] (0x94d5730) Go away received
I0918 03:10:42.581289       7 log.go:172] (0x94d5730) (0x94d57a0) Stream removed, broadcasting: 1
I0918 03:10:42.581384       7 log.go:172] (0x94d5730) (0x77ef500) Stream removed, broadcasting: 3
I0918 03:10:42.581455       7 log.go:172] (0x94d5730) (0x77ef5e0) Stream removed, broadcasting: 5
Sep 18 03:10:42.581: INFO: Exec stderr: ""
Sep 18 03:10:42.581: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8756 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:10:42.581: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:10:42.673441       7 log.go:172] (0x9404930) (0x9404a10) Create stream
I0918 03:10:42.673605       7 log.go:172] (0x9404930) (0x9404a10) Stream added, broadcasting: 1
I0918 03:10:42.677839       7 log.go:172] (0x9404930) Reply frame received for 1
I0918 03:10:42.677984       7 log.go:172] (0x9404930) (0x9404af0) Create stream
I0918 03:10:42.678062       7 log.go:172] (0x9404930) (0x9404af0) Stream added, broadcasting: 3
I0918 03:10:42.679450       7 log.go:172] (0x9404930) Reply frame received for 3
I0918 03:10:42.679579       7 log.go:172] (0x9404930) (0x9404bd0) Create stream
I0918 03:10:42.679655       7 log.go:172] (0x9404930) (0x9404bd0) Stream added, broadcasting: 5
I0918 03:10:42.680982       7 log.go:172] (0x9404930) Reply frame received for 5
I0918 03:10:42.743936       7 log.go:172] (0x9404930) Data frame received for 3
I0918 03:10:42.744102       7 log.go:172] (0x9404af0) (3) Data frame handling
I0918 03:10:42.744276       7 log.go:172] (0x9404af0) (3) Data frame sent
I0918 03:10:42.744375       7 log.go:172] (0x9404930) Data frame received for 3
I0918 03:10:42.744483       7 log.go:172] (0x9404930) Data frame received for 5
I0918 03:10:42.744674       7 log.go:172] (0x9404bd0) (5) Data frame handling
I0918 03:10:42.744892       7 log.go:172] (0x9404af0) (3) Data frame handling
I0918 03:10:42.745380       7 log.go:172] (0x9404930) Data frame received for 1
I0918 03:10:42.745551       7 log.go:172] (0x9404a10) (1) Data frame handling
I0918 03:10:42.745773       7 log.go:172] (0x9404a10) (1) Data frame sent
I0918 03:10:42.745925       7 log.go:172] (0x9404930) (0x9404a10) Stream removed, broadcasting: 1
I0918 03:10:42.746101       7 log.go:172] (0x9404930) Go away received
I0918 03:10:42.746504       7 log.go:172] (0x9404930) (0x9404a10) Stream removed, broadcasting: 1
I0918 03:10:42.746702       7 log.go:172] (0x9404930) (0x9404af0) Stream removed, broadcasting: 3
I0918 03:10:42.746904       7 log.go:172] (0x9404930) (0x9404bd0) Stream removed, broadcasting: 5
Sep 18 03:10:42.747: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Sep 18 03:10:42.747: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8756 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:10:42.747: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:10:42.850450       7 log.go:172] (0x898a2a0) (0x898a310) Create stream
I0918 03:10:42.850644       7 log.go:172] (0x898a2a0) (0x898a310) Stream added, broadcasting: 1
I0918 03:10:42.856265       7 log.go:172] (0x898a2a0) Reply frame received for 1
I0918 03:10:42.856489       7 log.go:172] (0x898a2a0) (0x94d5810) Create stream
I0918 03:10:42.856614       7 log.go:172] (0x898a2a0) (0x94d5810) Stream added, broadcasting: 3
I0918 03:10:42.858355       7 log.go:172] (0x898a2a0) Reply frame received for 3
I0918 03:10:42.858546       7 log.go:172] (0x898a2a0) (0x898a380) Create stream
I0918 03:10:42.858658       7 log.go:172] (0x898a2a0) (0x898a380) Stream added, broadcasting: 5
I0918 03:10:42.860383       7 log.go:172] (0x898a2a0) Reply frame received for 5
I0918 03:10:42.932095       7 log.go:172] (0x898a2a0) Data frame received for 3
I0918 03:10:42.932428       7 log.go:172] (0x898a2a0) Data frame received for 5
I0918 03:10:42.932690       7 log.go:172] (0x898a380) (5) Data frame handling
I0918 03:10:42.932927       7 log.go:172] (0x94d5810) (3) Data frame handling
I0918 03:10:42.933152       7 log.go:172] (0x94d5810) (3) Data frame sent
I0918 03:10:42.933418       7 log.go:172] (0x898a2a0) Data frame received for 3
I0918 03:10:42.933608       7 log.go:172] (0x94d5810) (3) Data frame handling
I0918 03:10:42.933795       7 log.go:172] (0x898a2a0) Data frame received for 1
I0918 03:10:42.933898       7 log.go:172] (0x898a310) (1) Data frame handling
I0918 03:10:42.933994       7 log.go:172] (0x898a310) (1) Data frame sent
I0918 03:10:42.934105       7 log.go:172] (0x898a2a0) (0x898a310) Stream removed, broadcasting: 1
I0918 03:10:42.934231       7 log.go:172] (0x898a2a0) Go away received
I0918 03:10:42.934584       7 log.go:172] (0x898a2a0) (0x898a310) Stream removed, broadcasting: 1
I0918 03:10:42.934711       7 log.go:172] (0x898a2a0) (0x94d5810) Stream removed, broadcasting: 3
I0918 03:10:42.934824       7 log.go:172] (0x898a2a0) (0x898a380) Stream removed, broadcasting: 5
Sep 18 03:10:42.934: INFO: Exec stderr: ""
Sep 18 03:10:42.934: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8756 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:10:42.935: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:10:43.025955       7 log.go:172] (0x8e48000) (0x8e48070) Create stream
I0918 03:10:43.026129       7 log.go:172] (0x8e48000) (0x8e48070) Stream added, broadcasting: 1
I0918 03:10:43.029830       7 log.go:172] (0x8e48000) Reply frame received for 1
I0918 03:10:43.030003       7 log.go:172] (0x8e48000) (0x8e480e0) Create stream
I0918 03:10:43.030098       7 log.go:172] (0x8e48000) (0x8e480e0) Stream added, broadcasting: 3
I0918 03:10:43.031492       7 log.go:172] (0x8e48000) Reply frame received for 3
I0918 03:10:43.031670       7 log.go:172] (0x8e48000) (0x8e48150) Create stream
I0918 03:10:43.031748       7 log.go:172] (0x8e48000) (0x8e48150) Stream added, broadcasting: 5
I0918 03:10:43.033072       7 log.go:172] (0x8e48000) Reply frame received for 5
I0918 03:10:43.098719       7 log.go:172] (0x8e48000) Data frame received for 3
I0918 03:10:43.099001       7 log.go:172] (0x8e480e0) (3) Data frame handling
I0918 03:10:43.099201       7 log.go:172] (0x8e48000) Data frame received for 5
I0918 03:10:43.099415       7 log.go:172] (0x8e48150) (5) Data frame handling
I0918 03:10:43.099591       7 log.go:172] (0x8e480e0) (3) Data frame sent
I0918 03:10:43.099744       7 log.go:172] (0x8e48000) Data frame received for 3
I0918 03:10:43.099868       7 log.go:172] (0x8e48000) Data frame received for 1
I0918 03:10:43.100032       7 log.go:172] (0x8e48070) (1) Data frame handling
I0918 03:10:43.100278       7 log.go:172] (0x8e480e0) (3) Data frame handling
I0918 03:10:43.100510       7 log.go:172] (0x8e48070) (1) Data frame sent
I0918 03:10:43.100730       7 log.go:172] (0x8e48000) (0x8e48070) Stream removed, broadcasting: 1
I0918 03:10:43.100973       7 log.go:172] (0x8e48000) Go away received
I0918 03:10:43.101396       7 log.go:172] (0x8e48000) (0x8e48070) Stream removed, broadcasting: 1
I0918 03:10:43.101534       7 log.go:172] (0x8e48000) (0x8e480e0) Stream removed, broadcasting: 3
I0918 03:10:43.101699       7 log.go:172] (0x8e48000) (0x8e48150) Stream removed, broadcasting: 5
Sep 18 03:10:43.101: INFO: Exec stderr: ""
Sep 18 03:10:43.102: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8756 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:10:43.102: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:10:43.196237       7 log.go:172] (0x94051f0) (0x94052d0) Create stream
I0918 03:10:43.196469       7 log.go:172] (0x94051f0) (0x94052d0) Stream added, broadcasting: 1
I0918 03:10:43.202680       7 log.go:172] (0x94051f0) Reply frame received for 1
I0918 03:10:43.202863       7 log.go:172] (0x94051f0) (0x94d5880) Create stream
I0918 03:10:43.202955       7 log.go:172] (0x94051f0) (0x94d5880) Stream added, broadcasting: 3
I0918 03:10:43.205173       7 log.go:172] (0x94051f0) Reply frame received for 3
I0918 03:10:43.205313       7 log.go:172] (0x94051f0) (0x9455490) Create stream
I0918 03:10:43.205386       7 log.go:172] (0x94051f0) (0x9455490) Stream added, broadcasting: 5
I0918 03:10:43.206489       7 log.go:172] (0x94051f0) Reply frame received for 5
I0918 03:10:43.263303       7 log.go:172] (0x94051f0) Data frame received for 3
I0918 03:10:43.263510       7 log.go:172] (0x94d5880) (3) Data frame handling
I0918 03:10:43.263675       7 log.go:172] (0x94051f0) Data frame received for 5
I0918 03:10:43.263858       7 log.go:172] (0x9455490) (5) Data frame handling
I0918 03:10:43.264012       7 log.go:172] (0x94d5880) (3) Data frame sent
I0918 03:10:43.264128       7 log.go:172] (0x94051f0) Data frame received for 3
I0918 03:10:43.264323       7 log.go:172] (0x94d5880) (3) Data frame handling
I0918 03:10:43.265164       7 log.go:172] (0x94051f0) Data frame received for 1
I0918 03:10:43.265384       7 log.go:172] (0x94052d0) (1) Data frame handling
I0918 03:10:43.265610       7 log.go:172] (0x94052d0) (1) Data frame sent
I0918 03:10:43.265787       7 log.go:172] (0x94051f0) (0x94052d0) Stream removed, broadcasting: 1
I0918 03:10:43.265999       7 log.go:172] (0x94051f0) Go away received
I0918 03:10:43.266583       7 log.go:172] (0x94051f0) (0x94052d0) Stream removed, broadcasting: 1
I0918 03:10:43.266790       7 log.go:172] (0x94051f0) (0x94d5880) Stream removed, broadcasting: 3
I0918 03:10:43.266944       7 log.go:172] (0x94051f0) (0x9455490) Stream removed, broadcasting: 5
Sep 18 03:10:43.267: INFO: Exec stderr: ""
Sep 18 03:10:43.267: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8756 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:10:43.267: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:10:43.363631       7 log.go:172] (0x94058f0) (0x94059d0) Create stream
I0918 03:10:43.363793       7 log.go:172] (0x94058f0) (0x94059d0) Stream added, broadcasting: 1
I0918 03:10:43.368695       7 log.go:172] (0x94058f0) Reply frame received for 1
I0918 03:10:43.368963       7 log.go:172] (0x94058f0) (0x8e481c0) Create stream
I0918 03:10:43.369089       7 log.go:172] (0x94058f0) (0x8e481c0) Stream added, broadcasting: 3
I0918 03:10:43.370859       7 log.go:172] (0x94058f0) Reply frame received for 3
I0918 03:10:43.371065       7 log.go:172] (0x94058f0) (0x8e48230) Create stream
I0918 03:10:43.371172       7 log.go:172] (0x94058f0) (0x8e48230) Stream added, broadcasting: 5
I0918 03:10:43.372976       7 log.go:172] (0x94058f0) Reply frame received for 5
I0918 03:10:43.430305       7 log.go:172] (0x94058f0) Data frame received for 3
I0918 03:10:43.430467       7 log.go:172] (0x8e481c0) (3) Data frame handling
I0918 03:10:43.430624       7 log.go:172] (0x8e481c0) (3) Data frame sent
I0918 03:10:43.430795       7 log.go:172] (0x94058f0) Data frame received for 5
I0918 03:10:43.430977       7 log.go:172] (0x8e48230) (5) Data frame handling
I0918 03:10:43.431206       7 log.go:172] (0x94058f0) Data frame received for 3
I0918 03:10:43.431367       7 log.go:172] (0x8e481c0) (3) Data frame handling
I0918 03:10:43.431970       7 log.go:172] (0x94058f0) Data frame received for 1
I0918 03:10:43.432249       7 log.go:172] (0x94059d0) (1) Data frame handling
I0918 03:10:43.432440       7 log.go:172] (0x94059d0) (1) Data frame sent
I0918 03:10:43.432630       7 log.go:172] (0x94058f0) (0x94059d0) Stream removed, broadcasting: 1
I0918 03:10:43.432835       7 log.go:172] (0x94058f0) Go away received
I0918 03:10:43.433090       7 log.go:172] (0x94058f0) (0x94059d0) Stream removed, broadcasting: 1
I0918 03:10:43.433288       7 log.go:172] (0x94058f0) (0x8e481c0) Stream removed, broadcasting: 3
I0918 03:10:43.433404       7 log.go:172] (0x94058f0) (0x8e48230) Stream removed, broadcasting: 5
Sep 18 03:10:43.433: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:10:43.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8756" for this suite.
Sep 18 03:11:27.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:11:27.589: INFO: namespace e2e-kubelet-etc-hosts-8756 deletion completed in 44.147116317s

• [SLOW TEST:62.279 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
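The three verification phases above correspond to three container shapes: containers with no /etc/hosts mount get the kubelet-managed file, a container that mounts its own file at /etc/hosts is left alone, and a hostNetwork=true pod keeps the node's file. A hypothetical sketch of the first pod's shape (container names match the log; images, commands, and the hostPath source are assumptions):

```yaml
# Hypothetical sketch of test-pod: busybox-1 and busybox-2 receive a
# kubelet-managed /etc/hosts; busybox-3 mounts a file at /etc/hosts itself,
# which disables kubelet management for that container.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  volumes:
  - name: etc-hosts-volume
    hostPath:
      path: /etc/hosts            # assumed source for the explicit mount
  containers:
  - name: busybox-1
    image: busybox                # assumed image
    command: ["sleep", "3600"]
  - name: busybox-2
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: etc-hosts-volume
      mountPath: /etc/hosts       # explicit /etc/hosts mount: not kubelet-managed
```

The second pod, test-host-network-pod, would differ only by setting `spec.hostNetwork: true`, which is why its containers see the node's own /etc/hosts rather than a kubelet-generated one.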
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:11:27.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Sep 18 03:11:27.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4198'
Sep 18 03:11:32.254: INFO: stderr: ""
Sep 18 03:11:32.254: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 18 03:11:32.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4198'
Sep 18 03:11:33.402: INFO: stderr: ""
Sep 18 03:11:33.402: INFO: stdout: "update-demo-nautilus-prpdf update-demo-nautilus-z68bz "
Sep 18 03:11:33.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prpdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4198'
Sep 18 03:11:34.515: INFO: stderr: ""
Sep 18 03:11:34.515: INFO: stdout: ""
Sep 18 03:11:34.515: INFO: update-demo-nautilus-prpdf is created but not running
Sep 18 03:11:39.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4198'
Sep 18 03:11:40.697: INFO: stderr: ""
Sep 18 03:11:40.697: INFO: stdout: "update-demo-nautilus-prpdf update-demo-nautilus-z68bz "
Sep 18 03:11:40.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prpdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4198'
Sep 18 03:11:41.838: INFO: stderr: ""
Sep 18 03:11:41.838: INFO: stdout: "true"
Sep 18 03:11:41.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prpdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4198'
Sep 18 03:11:42.943: INFO: stderr: ""
Sep 18 03:11:42.943: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 18 03:11:42.944: INFO: validating pod update-demo-nautilus-prpdf
Sep 18 03:11:42.952: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 18 03:11:42.952: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Sep 18 03:11:42.952: INFO: update-demo-nautilus-prpdf is verified up and running
Sep 18 03:11:42.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z68bz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4198'
Sep 18 03:11:44.085: INFO: stderr: ""
Sep 18 03:11:44.085: INFO: stdout: "true"
Sep 18 03:11:44.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z68bz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4198'
Sep 18 03:11:45.230: INFO: stderr: ""
Sep 18 03:11:45.230: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 18 03:11:45.230: INFO: validating pod update-demo-nautilus-z68bz
Sep 18 03:11:45.237: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 18 03:11:45.237: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Sep 18 03:11:45.237: INFO: update-demo-nautilus-z68bz is verified up and running
STEP: scaling down the replication controller
Sep 18 03:11:45.248: INFO: scanned /root for discovery docs: 
Sep 18 03:11:45.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4198'
Sep 18 03:11:46.467: INFO: stderr: ""
Sep 18 03:11:46.467: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 18 03:11:46.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4198'
Sep 18 03:11:47.595: INFO: stderr: ""
Sep 18 03:11:47.595: INFO: stdout: "update-demo-nautilus-prpdf update-demo-nautilus-z68bz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Sep 18 03:11:52.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4198'
Sep 18 03:11:53.740: INFO: stderr: ""
Sep 18 03:11:53.740: INFO: stdout: "update-demo-nautilus-prpdf "
Sep 18 03:11:53.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prpdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4198'
Sep 18 03:11:54.890: INFO: stderr: ""
Sep 18 03:11:54.890: INFO: stdout: "true"
Sep 18 03:11:54.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prpdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4198'
Sep 18 03:11:56.025: INFO: stderr: ""
Sep 18 03:11:56.025: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 18 03:11:56.025: INFO: validating pod update-demo-nautilus-prpdf
Sep 18 03:11:56.030: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 18 03:11:56.031: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 18 03:11:56.031: INFO: update-demo-nautilus-prpdf is verified up and running
STEP: scaling up the replication controller
Sep 18 03:11:56.037: INFO: scanned /root for discovery docs: 
Sep 18 03:11:56.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4198'
Sep 18 03:11:57.258: INFO: stderr: ""
Sep 18 03:11:57.259: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 18 03:11:57.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4198'
Sep 18 03:11:58.398: INFO: stderr: ""
Sep 18 03:11:58.398: INFO: stdout: "update-demo-nautilus-kh8k4 update-demo-nautilus-prpdf "
Sep 18 03:11:58.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kh8k4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4198'
Sep 18 03:11:59.561: INFO: stderr: ""
Sep 18 03:11:59.562: INFO: stdout: ""
Sep 18 03:11:59.562: INFO: update-demo-nautilus-kh8k4 is created but not running
Sep 18 03:12:04.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4198'
Sep 18 03:12:05.714: INFO: stderr: ""
Sep 18 03:12:05.714: INFO: stdout: "update-demo-nautilus-kh8k4 update-demo-nautilus-prpdf "
Sep 18 03:12:05.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kh8k4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4198'
Sep 18 03:12:06.839: INFO: stderr: ""
Sep 18 03:12:06.839: INFO: stdout: "true"
Sep 18 03:12:06.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kh8k4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4198'
Sep 18 03:12:07.962: INFO: stderr: ""
Sep 18 03:12:07.962: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 18 03:12:07.962: INFO: validating pod update-demo-nautilus-kh8k4
Sep 18 03:12:07.969: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 18 03:12:07.969: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 18 03:12:07.969: INFO: update-demo-nautilus-kh8k4 is verified up and running
Sep 18 03:12:07.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prpdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4198'
Sep 18 03:12:09.077: INFO: stderr: ""
Sep 18 03:12:09.078: INFO: stdout: "true"
Sep 18 03:12:09.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prpdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4198'
Sep 18 03:12:10.207: INFO: stderr: ""
Sep 18 03:12:10.208: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 18 03:12:10.208: INFO: validating pod update-demo-nautilus-prpdf
Sep 18 03:12:10.213: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 18 03:12:10.214: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 18 03:12:10.214: INFO: update-demo-nautilus-prpdf is verified up and running
STEP: using delete to clean up resources
Sep 18 03:12:10.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4198'
Sep 18 03:12:11.326: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 18 03:12:11.327: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Sep 18 03:12:11.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4198'
Sep 18 03:12:12.660: INFO: stderr: "No resources found.\n"
Sep 18 03:12:12.661: INFO: stdout: ""
Sep 18 03:12:12.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4198 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 18 03:12:13.792: INFO: stderr: ""
Sep 18 03:12:13.792: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:12:13.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4198" for this suite.
Sep 18 03:12:35.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:12:35.964: INFO: namespace kubectl-4198 deletion completed in 22.160951718s

• [SLOW TEST:68.370 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
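The pod checks in the scaling test above are driven by kubectl go-templates such as `{{if (exists . "status" "containerStatuses")}}…`. `exists` is not a built-in `text/template` function; kubectl registers extra helpers for its template printer. A minimal sketch of how such a readiness check evaluates, using a hypothetical `exists` helper over nested maps (not kubectl's actual implementation):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// exists reports whether the nested keys are all present in the map.
// This is a stand-in for kubectl's template helper of the same name.
func exists(data map[string]interface{}, keys ...string) bool {
	cur := data
	for i, k := range keys {
		v, ok := cur[k]
		if !ok {
			return false
		}
		if i == len(keys)-1 {
			return true
		}
		cur, ok = v.(map[string]interface{})
		if !ok {
			return false
		}
	}
	return true
}

// runCheck renders the same style of template the e2e test uses,
// against a fabricated pod object.
func runCheck() string {
	pod := map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{"name": "update-demo"},
			},
		},
	}
	tmpl := template.Must(template.New("check").
		Funcs(template.FuncMap{"exists": exists}).
		Parse(`{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}true{{end}}{{end}}{{end}}`))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, pod); err != nil {
		return "error: " + err.Error()
	}
	return out.String()
}

func main() {
	fmt.Println(runCheck()) // prints "true" when the container status is present
}
```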
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:12:35.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-b8cf5f99-b056-466a-948e-be15fd739d46
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:12:36.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1861" for this suite.
Sep 18 03:12:42.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:12:42.281: INFO: namespace secrets-1861 deletion completed in 6.195452621s

• [SLOW TEST:6.316 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
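The expected failure in the Secrets test above comes from data key validation: a secret data key must be non-empty and limited to alphanumerics, `-`, `_`, and `.`. A hedged approximation of that rule as a regular expression (not the apiserver's actual validation code):

```go
package main

import (
	"fmt"
	"regexp"
)

// keyRe approximates the character set Kubernetes accepts for
// secret/configmap data keys. Note that `+` requires at least one
// character, so the empty key used by the test is rejected.
var keyRe = regexp.MustCompile(`^[-._a-zA-Z0-9]+$`)

func validKey(k string) bool { return keyRe.MatchString(k) }

func main() {
	fmt.Println(validKey(""))       // false: empty key, as in the test
	fmt.Println(validKey("ca.crt")) // true: a typical secret key
}
```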
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:12:42.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 03:12:42.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6387a57d-e367-494a-843d-6efbb7c9ebc0" in namespace "projected-7254" to be "success or failure"
Sep 18 03:12:42.432: INFO: Pod "downwardapi-volume-6387a57d-e367-494a-843d-6efbb7c9ebc0": Phase="Pending", Reason="", readiness=false. Elapsed: 70.03605ms
Sep 18 03:12:44.441: INFO: Pod "downwardapi-volume-6387a57d-e367-494a-843d-6efbb7c9ebc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078664127s
Sep 18 03:12:46.449: INFO: Pod "downwardapi-volume-6387a57d-e367-494a-843d-6efbb7c9ebc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086787358s
STEP: Saw pod success
Sep 18 03:12:46.449: INFO: Pod "downwardapi-volume-6387a57d-e367-494a-843d-6efbb7c9ebc0" satisfied condition "success or failure"
Sep 18 03:12:46.455: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6387a57d-e367-494a-843d-6efbb7c9ebc0 container client-container: 
STEP: delete the pod
Sep 18 03:12:46.475: INFO: Waiting for pod downwardapi-volume-6387a57d-e367-494a-843d-6efbb7c9ebc0 to disappear
Sep 18 03:12:46.479: INFO: Pod downwardapi-volume-6387a57d-e367-494a-843d-6efbb7c9ebc0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:12:46.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7254" for this suite.
Sep 18 03:12:52.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:12:52.713: INFO: namespace projected-7254 deletion completed in 6.226385204s

• [SLOW TEST:10.431 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:12:52.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:12:52.840: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4954081c-662e-4699-a15d-789aeec0fad1", Controller:(*bool)(0x95dc2d6), BlockOwnerDeletion:(*bool)(0x95dc2d7)}}
Sep 18 03:12:52.905: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"10e85883-d746-4490-9fbc-4b41fda12e6a", Controller:(*bool)(0x93cffd6), BlockOwnerDeletion:(*bool)(0x93cffd7)}}
Sep 18 03:12:53.073: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"59d38858-b273-4446-adda-4f258d44564a", Controller:(*bool)(0x95dc58a), BlockOwnerDeletion:(*bool)(0x95dc58b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:12:58.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-651" for this suite.
Sep 18 03:13:04.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:13:04.265: INFO: namespace gc-651 deletion completed in 6.168303055s

• [SLOW TEST:11.551 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
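The OwnerReferences logged above form a deliberate circle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. The test verifies that namespace cleanup is not blocked by such a cycle. A generic cycle check over a name-to-owner map illustrates the shape of the dependency graph (this is not the garbage collector's real algorithm):

```go
package main

import "fmt"

// hasCycle walks each object's owner chain and reports whether any
// chain revisits a node, i.e. whether the ownership graph has a cycle.
func hasCycle(owners map[string]string) bool {
	for start := range owners {
		seen := map[string]bool{start: true}
		cur := start
		for {
			next, ok := owners[cur]
			if !ok {
				break // chain ends at an unowned object
			}
			if seen[next] {
				return true
			}
			seen[next] = true
			cur = next
		}
	}
	return false
}

func main() {
	// Ownership as reported in the log lines above: pod1 -> pod3 -> pod2 -> pod1.
	owners := map[string]string{"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}
	fmt.Println(hasCycle(owners)) // true
}
```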
S
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:13:04.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7389.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7389.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7389.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7389.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7389.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7389.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 18 03:13:10.474: INFO: DNS probes using dns-7389/dns-test-f6356f3a-6761-4af8-aeb0-cecfabeeb3d5 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:13:10.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7389" for this suite.
Sep 18 03:13:16.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:13:16.714: INFO: namespace dns-7389 deletion completed in 6.196952515s

• [SLOW TEST:12.448 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
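The probe commands in the DNS test above derive the pod's A record from its IP with awk, turning dots into dashes: an IP like 10.244.1.5 in namespace dns-7389 resolves as `10-244-1-5.dns-7389.pod.cluster.local`. The equivalent transformation (IP and namespace here are illustrative values, not taken from the log):

```go
package main

import (
	"fmt"
	"strings"
)

// podARecord builds the cluster-DNS A record name for a pod IP,
// mirroring the awk pipeline in the probe script above.
func podARecord(ip, namespace string) string {
	return strings.ReplaceAll(ip, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	fmt.Println(podARecord("10.244.1.5", "dns-7389")) // 10-244-1-5.dns-7389.pod.cluster.local
}
```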
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:13:16.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Sep 18 03:13:20.843: INFO: Pod pod-hostip-2177e2f5-73c1-4fbb-8896-85815b4d05b1 has hostIP: 172.18.0.6
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:13:20.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-475" for this suite.
Sep 18 03:13:42.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:13:43.021: INFO: namespace pods-475 deletion completed in 22.170866196s

• [SLOW TEST:26.306 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:13:43.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Sep 18 03:13:47.663: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5052 pod-service-account-a62f022f-8702-47cd-abcf-4741f7becce4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Sep 18 03:13:49.020: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5052 pod-service-account-a62f022f-8702-47cd-abcf-4741f7becce4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Sep 18 03:13:50.349: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5052 pod-service-account-a62f022f-8702-47cd-abcf-4741f7becce4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:13:51.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5052" for this suite.
Sep 18 03:13:57.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:13:57.980: INFO: namespace svcaccounts-5052 deletion completed in 6.185026378s

• [SLOW TEST:14.958 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:13:57.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 03:13:58.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a02de466-96d4-4242-b62d-c4bd0e82176d" in namespace "projected-8187" to be "success or failure"
Sep 18 03:13:58.046: INFO: Pod "downwardapi-volume-a02de466-96d4-4242-b62d-c4bd0e82176d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.815803ms
Sep 18 03:14:00.055: INFO: Pod "downwardapi-volume-a02de466-96d4-4242-b62d-c4bd0e82176d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022178931s
Sep 18 03:14:02.067: INFO: Pod "downwardapi-volume-a02de466-96d4-4242-b62d-c4bd0e82176d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034501386s
STEP: Saw pod success
Sep 18 03:14:02.067: INFO: Pod "downwardapi-volume-a02de466-96d4-4242-b62d-c4bd0e82176d" satisfied condition "success or failure"
Sep 18 03:14:02.072: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a02de466-96d4-4242-b62d-c4bd0e82176d container client-container: 
STEP: delete the pod
Sep 18 03:14:02.093: INFO: Waiting for pod downwardapi-volume-a02de466-96d4-4242-b62d-c4bd0e82176d to disappear
Sep 18 03:14:02.097: INFO: Pod downwardapi-volume-a02de466-96d4-4242-b62d-c4bd0e82176d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:14:02.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8187" for this suite.
Sep 18 03:14:08.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:14:08.267: INFO: namespace projected-8187 deletion completed in 6.160626704s

• [SLOW TEST:10.283 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:14:08.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-a1937e90-f0db-47b4-a448-fbb8bd903251
STEP: Creating a pod to test consume configMaps
Sep 18 03:14:08.352: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-24c4a32f-e0f8-4008-8f30-227702f42780" in namespace "projected-20" to be "success or failure"
Sep 18 03:14:08.408: INFO: Pod "pod-projected-configmaps-24c4a32f-e0f8-4008-8f30-227702f42780": Phase="Pending", Reason="", readiness=false. Elapsed: 55.859934ms
Sep 18 03:14:10.416: INFO: Pod "pod-projected-configmaps-24c4a32f-e0f8-4008-8f30-227702f42780": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063789156s
Sep 18 03:14:12.424: INFO: Pod "pod-projected-configmaps-24c4a32f-e0f8-4008-8f30-227702f42780": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071845211s
STEP: Saw pod success
Sep 18 03:14:12.425: INFO: Pod "pod-projected-configmaps-24c4a32f-e0f8-4008-8f30-227702f42780" satisfied condition "success or failure"
Sep 18 03:14:12.430: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-24c4a32f-e0f8-4008-8f30-227702f42780 container projected-configmap-volume-test: 
STEP: delete the pod
Sep 18 03:14:12.464: INFO: Waiting for pod pod-projected-configmaps-24c4a32f-e0f8-4008-8f30-227702f42780 to disappear
Sep 18 03:14:12.488: INFO: Pod pod-projected-configmaps-24c4a32f-e0f8-4008-8f30-227702f42780 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:14:12.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-20" for this suite.
Sep 18 03:14:18.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:14:18.660: INFO: namespace projected-20 deletion completed in 6.161796904s

• [SLOW TEST:10.390 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:14:18.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:14:18.814: INFO: Create a RollingUpdate DaemonSet
Sep 18 03:14:18.821: INFO: Check that daemon pods launch on every node of the cluster
Sep 18 03:14:18.834: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:14:18.852: INFO: Number of nodes with available pods: 0
Sep 18 03:14:18.852: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:14:19.862: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:14:19.869: INFO: Number of nodes with available pods: 0
Sep 18 03:14:19.869: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:14:20.929: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:14:20.935: INFO: Number of nodes with available pods: 0
Sep 18 03:14:20.935: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:14:21.872: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:14:21.880: INFO: Number of nodes with available pods: 0
Sep 18 03:14:21.880: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:14:22.863: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:14:22.870: INFO: Number of nodes with available pods: 1
Sep 18 03:14:22.870: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:14:23.865: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:14:23.871: INFO: Number of nodes with available pods: 2
Sep 18 03:14:23.871: INFO: Number of running nodes: 2, number of available pods: 2
Sep 18 03:14:23.872: INFO: Update the DaemonSet to trigger a rollout
Sep 18 03:14:23.884: INFO: Updating DaemonSet daemon-set
Sep 18 03:14:34.930: INFO: Roll back the DaemonSet before rollout is complete
Sep 18 03:14:34.941: INFO: Updating DaemonSet daemon-set
Sep 18 03:14:34.941: INFO: Make sure DaemonSet rollback is complete
Sep 18 03:14:34.950: INFO: Wrong image for pod: daemon-set-mcxgq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Sep 18 03:14:34.950: INFO: Pod daemon-set-mcxgq is not available
Sep 18 03:14:34.978: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:14:35.988: INFO: Wrong image for pod: daemon-set-mcxgq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Sep 18 03:14:35.989: INFO: Pod daemon-set-mcxgq is not available
Sep 18 03:14:35.997: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:14:37.083: INFO: Pod daemon-set-mxhzj is not available
Sep 18 03:14:37.204: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1637, will wait for the garbage collector to delete the pods
Sep 18 03:14:37.327: INFO: Deleting DaemonSet.extensions daemon-set took: 33.381456ms
Sep 18 03:14:37.628: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.845898ms
Sep 18 03:14:44.635: INFO: Number of nodes with available pods: 0
Sep 18 03:14:44.635: INFO: Number of running nodes: 0, number of available pods: 0
Sep 18 03:14:44.639: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1637/daemonsets","resourceVersion":"795127"},"items":null}

Sep 18 03:14:44.642: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1637/pods","resourceVersion":"795127"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:14:44.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1637" for this suite.
Sep 18 03:14:50.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:14:50.890: INFO: namespace daemonsets-1637 deletion completed in 6.217230369s

• [SLOW TEST:32.230 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
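Editor's note: the rollback flow above (create a RollingUpdate DaemonSet, update it to a broken image, roll back before the rollout completes) can be approximated with a manifest like the following. The object name and the two images are taken from the log; everything else (labels, container name) is an assumed reconstruction, not the test's actual spec:

```yaml
# Sketch of a RollingUpdate DaemonSet like the one the test creates.
# Reconstructed from the log; labels and the container name are assumed.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        # updated to foo:non-existent mid-test, then rolled back to this image
        image: docker.io/library/nginx:1.14-alpine
```

The rollback itself corresponds to `kubectl rollout undo daemonset/daemon-set`; the test then asserts that pods still running the original image are not restarted unnecessarily.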
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:14:50.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2720.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2720.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 18 03:14:57.040: INFO: DNS probes using dns-2720/dns-test-48771db3-7cd2-483a-ba41-3bded0eb3c28 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:14:57.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2720" for this suite.
Sep 18 03:15:03.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:15:03.258: INFO: namespace dns-2720 deletion completed in 6.170847206s

• [SLOW TEST:12.365 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
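Editor's note: the probe pods run the shell loops logged above (the doubled `$$` is template escaping for a literal `$`). A hand-written probe pod along the same lines might look like this; the pod name, image, and use of `nslookup` instead of `dig` are illustrative assumptions, not what the framework deploys:

```yaml
# Illustrative DNS probe pod; name and image are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: dns-probe
spec:
  restartPolicy: Never
  containers:
  - name: querier
    image: docker.io/library/busybox:1.31   # assumed image providing nslookup
    command: ["sh", "-c"]
    # succeed only if the cluster service name resolves
    args:
    - nslookup kubernetes.default.svc.cluster.local && echo OK
```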
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:15:03.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 18 03:15:03.331: INFO: Waiting up to 5m0s for pod "pod-31b0c465-a4e5-43bd-bc1e-3e7dcf85ff65" in namespace "emptydir-2042" to be "success or failure"
Sep 18 03:15:03.338: INFO: Pod "pod-31b0c465-a4e5-43bd-bc1e-3e7dcf85ff65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.830428ms
Sep 18 03:15:05.345: INFO: Pod "pod-31b0c465-a4e5-43bd-bc1e-3e7dcf85ff65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013790401s
Sep 18 03:15:07.353: INFO: Pod "pod-31b0c465-a4e5-43bd-bc1e-3e7dcf85ff65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021372435s
STEP: Saw pod success
Sep 18 03:15:07.353: INFO: Pod "pod-31b0c465-a4e5-43bd-bc1e-3e7dcf85ff65" satisfied condition "success or failure"
Sep 18 03:15:07.358: INFO: Trying to get logs from node iruya-worker2 pod pod-31b0c465-a4e5-43bd-bc1e-3e7dcf85ff65 container test-container: 
STEP: delete the pod
Sep 18 03:15:07.376: INFO: Waiting for pod pod-31b0c465-a4e5-43bd-bc1e-3e7dcf85ff65 to disappear
Sep 18 03:15:07.380: INFO: Pod pod-31b0c465-a4e5-43bd-bc1e-3e7dcf85ff65 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:15:07.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2042" for this suite.
Sep 18 03:15:13.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:15:13.571: INFO: namespace emptydir-2042 deletion completed in 6.182301274s

• [SLOW TEST:10.311 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
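Editor's note: the `(non-root,0666,tmpfs)` case corresponds to a pod that mounts a memory-backed emptyDir while running as a non-root user and checks file modes inside it. A reconstructed sketch; the UID, image, mount path, and command are assumptions:

```yaml
# Sketch of the (non-root, tmpfs) emptyDir case; all values assumed.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs
spec:
  securityContext:
    runAsUser: 1001                 # assumed non-root UID
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.31   # assumed
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed emptyDir
```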
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:15:13.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:15:13.660: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: 
alternatives.log
containers/

[... identical directory listing repeated for the remaining proxy attempts; log truncated through the start of the next test ...]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 18 03:15:20.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6011'
Sep 18 03:15:21.184: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 18 03:15:21.184: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Sep 18 03:15:21.193: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-rqprc]
Sep 18 03:15:21.193: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-rqprc" in namespace "kubectl-6011" to be "running and ready"
Sep 18 03:15:21.195: INFO: Pod "e2e-test-nginx-rc-rqprc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.489987ms
Sep 18 03:15:23.202: INFO: Pod "e2e-test-nginx-rc-rqprc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009408817s
Sep 18 03:15:25.229: INFO: Pod "e2e-test-nginx-rc-rqprc": Phase="Running", Reason="", readiness=true. Elapsed: 4.036292792s
Sep 18 03:15:25.230: INFO: Pod "e2e-test-nginx-rc-rqprc" satisfied condition "running and ready"
Sep 18 03:15:25.230: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-rqprc]
Sep 18 03:15:25.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6011'
Sep 18 03:15:26.447: INFO: stderr: ""
Sep 18 03:15:26.447: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Sep 18 03:15:26.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6011'
Sep 18 03:15:27.572: INFO: stderr: ""
Sep 18 03:15:27.572: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:15:27.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6011" for this suite.
Sep 18 03:15:49.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:15:49.758: INFO: namespace kubectl-6011 deletion completed in 22.176085347s

• [SLOW TEST:29.805 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
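Editor's note: the stderr above warns that `kubectl run --generator=run/v1` is deprecated. The equivalent object can be created directly from a manifest; a minimal ReplicationController of the shape that generator produces (the `run=<name>` label convention is how `kubectl run` labels its pods, but the exact spec is a reconstruction):

```yaml
# Sketch of the ReplicationController the deprecated generator creates.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc          # assumed label, per kubectl run convention
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

Applied with `kubectl apply -f`, this avoids the deprecated generator while producing the same controller-plus-pod structure the test verifies.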
SSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:15:49.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-1c6775db-2b44-4552-9adc-4663e2e1885d
STEP: Creating secret with name secret-projected-all-test-volume-5b27294e-f720-4e67-b69d-491c644fc261
STEP: Creating a pod to test Check all projections for projected volume plugin
Sep 18 03:15:49.898: INFO: Waiting up to 5m0s for pod "projected-volume-619a8700-5eae-4ddc-bfa2-d0b13dec333e" in namespace "projected-4395" to be "success or failure"
Sep 18 03:15:49.928: INFO: Pod "projected-volume-619a8700-5eae-4ddc-bfa2-d0b13dec333e": Phase="Pending", Reason="", readiness=false. Elapsed: 29.641274ms
Sep 18 03:15:51.969: INFO: Pod "projected-volume-619a8700-5eae-4ddc-bfa2-d0b13dec333e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070265117s
Sep 18 03:15:53.976: INFO: Pod "projected-volume-619a8700-5eae-4ddc-bfa2-d0b13dec333e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077900537s
STEP: Saw pod success
Sep 18 03:15:53.977: INFO: Pod "projected-volume-619a8700-5eae-4ddc-bfa2-d0b13dec333e" satisfied condition "success or failure"
Sep 18 03:15:53.982: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-619a8700-5eae-4ddc-bfa2-d0b13dec333e container projected-all-volume-test: 
STEP: delete the pod
Sep 18 03:15:54.006: INFO: Waiting for pod projected-volume-619a8700-5eae-4ddc-bfa2-d0b13dec333e to disappear
Sep 18 03:15:54.010: INFO: Pod projected-volume-619a8700-5eae-4ddc-bfa2-d0b13dec333e no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:15:54.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4395" for this suite.
Sep 18 03:16:00.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:16:00.195: INFO: namespace projected-4395 deletion completed in 6.177494388s

• [SLOW TEST:10.436 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
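Editor's note: a projected volume combines several sources under one mount. A sketch of such a volume using the configMap and secret names from the log; the data keys, file paths, image, and downwardAPI item are assumptions:

```yaml
# Sketch of an all-sources projected volume; keys and paths are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: docker.io/library/busybox:1.31   # assumed
    command: ["sh", "-c", "cat /all/podname /all/configmap-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: configmap-projected-all-test-volume-1c6775db-2b44-4552-9adc-4663e2e1885d
          items:
          - key: data               # key/path assumed
            path: configmap-data
      - secret:
          name: secret-projected-all-test-volume-5b27294e-f720-4e67-b69d-491c644fc261
          items:
          - key: data               # key/path assumed
            path: secret-data
```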
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:16:00.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0918 03:16:11.840376       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 18 03:16:11.840: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:16:11.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6830" for this suite.
Sep 18 03:16:21.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:16:22.007: INFO: namespace gc-6830 deletion completed in 10.157384005s

• [SLOW TEST:21.809 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
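Editor's note: the "set half of pods ... to have rc simpletest-rc-to-stay as owner as well" step works by adding a second entry to each pod's `ownerReferences`. A metadata fragment of the kind involved; the UIDs are placeholders and the `controller`/`blockOwnerDeletion` values are assumptions:

```yaml
# Fragment of a pod's metadata with two owners; UIDs are placeholders.
metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: "00000000-0000-0000-0000-000000000001"   # placeholder
    controller: true
    blockOwnerDeletion: true
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: "00000000-0000-0000-0000-000000000002"   # placeholder
    controller: false
```

Because a valid owner (`simpletest-rc-to-stay`) remains after the first owner is deleted, the garbage collector must leave these pods alone, which is what the test asserts.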
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:16:22.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-503e0b2a-87f0-48af-93b1-f3c4f6763dac
STEP: Creating a pod to test consume secrets
Sep 18 03:16:22.101: INFO: Waiting up to 5m0s for pod "pod-secrets-fa835bc3-3ef5-4a70-8c89-dd982efceb6e" in namespace "secrets-9978" to be "success or failure"
Sep 18 03:16:22.109: INFO: Pod "pod-secrets-fa835bc3-3ef5-4a70-8c89-dd982efceb6e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.796518ms
Sep 18 03:16:24.115: INFO: Pod "pod-secrets-fa835bc3-3ef5-4a70-8c89-dd982efceb6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014412964s
Sep 18 03:16:26.121: INFO: Pod "pod-secrets-fa835bc3-3ef5-4a70-8c89-dd982efceb6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020070378s
STEP: Saw pod success
Sep 18 03:16:26.121: INFO: Pod "pod-secrets-fa835bc3-3ef5-4a70-8c89-dd982efceb6e" satisfied condition "success or failure"
Sep 18 03:16:26.125: INFO: Trying to get logs from node iruya-worker pod pod-secrets-fa835bc3-3ef5-4a70-8c89-dd982efceb6e container secret-volume-test: 
STEP: delete the pod
Sep 18 03:16:26.293: INFO: Waiting for pod pod-secrets-fa835bc3-3ef5-4a70-8c89-dd982efceb6e to disappear
Sep 18 03:16:26.313: INFO: Pod pod-secrets-fa835bc3-3ef5-4a70-8c89-dd982efceb6e no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:16:26.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9978" for this suite.
Sep 18 03:16:32.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:16:32.468: INFO: namespace secrets-9978 deletion completed in 6.146997645s

• [SLOW TEST:10.460 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
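Editor's note: the secret-volume case above exercises `defaultMode` together with a pod-level `fsGroup` while running as non-root. A reconstructed sketch using the secret name from the log; the UID, group, mode, image, and mount path are assumptions:

```yaml
# Sketch of the non-root secret volume case; numeric values are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-example  # illustrative name
spec:
  securityContext:
    runAsUser: 1000                 # assumed non-root UID
    fsGroup: 1001                   # assumed supplemental group
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.31   # assumed
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-503e0b2a-87f0-48af-93b1-f3c4f6763dac
      defaultMode: 0400             # assumed mode for the non-root case
```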
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:16:32.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:16:32.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Sep 18 03:16:33.673: INFO: stderr: ""
Sep 18 03:16:33.674: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/arm\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:31:02Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:16:33.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7476" for this suite.
Sep 18 03:16:39.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:16:39.849: INFO: namespace kubectl-7476 deletion completed in 6.165911793s

• [SLOW TEST:7.381 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
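The spec above shells out to `kubectl version` and asserts that both the client and server `version.Info` structs appear in stdout. A minimal manual reproduction of that check (a sketch — it assumes a reachable cluster and reuses the kubeconfig path from the log) looks like:

```shell
# Reproduces the check this spec performs; requires a live cluster.
# The kubeconfig path matches the log above; substitute your own.
kubectl --kubeconfig=/root/.kube/config version
# Output shape (values will differ per cluster):
#   Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", ...}
#   Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11", ...}
```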
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:16:39.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Sep 18 03:16:39.968: INFO: Waiting up to 5m0s for pod "client-containers-0f6e148e-e2ef-4e47-a815-7babdba5fae8" in namespace "containers-2543" to be "success or failure"
Sep 18 03:16:39.976: INFO: Pod "client-containers-0f6e148e-e2ef-4e47-a815-7babdba5fae8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.346865ms
Sep 18 03:16:42.128: INFO: Pod "client-containers-0f6e148e-e2ef-4e47-a815-7babdba5fae8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159247872s
Sep 18 03:16:44.134: INFO: Pod "client-containers-0f6e148e-e2ef-4e47-a815-7babdba5fae8": Phase="Running", Reason="", readiness=true. Elapsed: 4.166032446s
Sep 18 03:16:46.141: INFO: Pod "client-containers-0f6e148e-e2ef-4e47-a815-7babdba5fae8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.173175533s
STEP: Saw pod success
Sep 18 03:16:46.142: INFO: Pod "client-containers-0f6e148e-e2ef-4e47-a815-7babdba5fae8" satisfied condition "success or failure"
Sep 18 03:16:46.150: INFO: Trying to get logs from node iruya-worker2 pod client-containers-0f6e148e-e2ef-4e47-a815-7babdba5fae8 container test-container: 
STEP: delete the pod
Sep 18 03:16:46.214: INFO: Waiting for pod client-containers-0f6e148e-e2ef-4e47-a815-7babdba5fae8 to disappear
Sep 18 03:16:46.219: INFO: Pod client-containers-0f6e148e-e2ef-4e47-a815-7babdba5fae8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:16:46.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2543" for this suite.
Sep 18 03:16:52.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:16:52.370: INFO: namespace containers-2543 deletion completed in 6.141338501s

• [SLOW TEST:12.513 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
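The "override the image's default command" spec creates a pod whose `command` field replaces the image's ENTRYPOINT (while `args` would replace its CMD). A minimal sketch of such a pod — the manifest below is illustrative, not the test binary's actual spec — can be applied like this:

```shell
# Illustrative only: `command` overrides the image ENTRYPOINT,
# `args` overrides the image CMD. Requires a live cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]            # replaces the image's ENTRYPOINT
    args: ["override", "command"]     # replaces the image's default arguments
EOF
```

The test then reads the container's logs and asserts the echoed arguments, which is why the pod is expected to reach "Succeeded" rather than stay "Running".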
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:16:52.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep 18 03:16:52.535: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:16:52.550: INFO: Number of nodes with available pods: 0
Sep 18 03:16:52.550: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:16:53.727: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:16:53.733: INFO: Number of nodes with available pods: 0
Sep 18 03:16:53.733: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:16:54.685: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:16:54.691: INFO: Number of nodes with available pods: 0
Sep 18 03:16:54.692: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:16:55.661: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:16:55.667: INFO: Number of nodes with available pods: 0
Sep 18 03:16:55.667: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:16:56.561: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:16:56.567: INFO: Number of nodes with available pods: 0
Sep 18 03:16:56.567: INFO: Node iruya-worker is running more than one daemon pod
Sep 18 03:16:57.562: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:16:57.568: INFO: Number of nodes with available pods: 2
Sep 18 03:16:57.568: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Sep 18 03:16:57.634: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:16:57.640: INFO: Number of nodes with available pods: 1
Sep 18 03:16:57.640: INFO: Node iruya-worker2 is running more than one daemon pod
Sep 18 03:16:58.653: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:16:58.661: INFO: Number of nodes with available pods: 1
Sep 18 03:16:58.661: INFO: Node iruya-worker2 is running more than one daemon pod
Sep 18 03:16:59.683: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:16:59.689: INFO: Number of nodes with available pods: 1
Sep 18 03:16:59.690: INFO: Node iruya-worker2 is running more than one daemon pod
Sep 18 03:17:00.653: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:17:00.661: INFO: Number of nodes with available pods: 1
Sep 18 03:17:00.661: INFO: Node iruya-worker2 is running more than one daemon pod
Sep 18 03:17:01.654: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:17:01.660: INFO: Number of nodes with available pods: 1
Sep 18 03:17:01.660: INFO: Node iruya-worker2 is running more than one daemon pod
Sep 18 03:17:02.653: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:17:02.659: INFO: Number of nodes with available pods: 1
Sep 18 03:17:02.659: INFO: Node iruya-worker2 is running more than one daemon pod
Sep 18 03:17:03.653: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:17:03.659: INFO: Number of nodes with available pods: 1
Sep 18 03:17:03.659: INFO: Node iruya-worker2 is running more than one daemon pod
Sep 18 03:17:04.652: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 18 03:17:04.659: INFO: Number of nodes with available pods: 2
Sep 18 03:17:04.659: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5325, will wait for the garbage collector to delete the pods
Sep 18 03:17:04.727: INFO: Deleting DaemonSet.extensions daemon-set took: 8.108313ms
Sep 18 03:17:05.028: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.860135ms
Sep 18 03:17:14.634: INFO: Number of nodes with available pods: 0
Sep 18 03:17:14.634: INFO: Number of running nodes: 0, number of available pods: 0
Sep 18 03:17:14.638: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5325/daemonsets","resourceVersion":"795885"},"items":null}

Sep 18 03:17:14.642: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5325/pods","resourceVersion":"795885"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:17:14.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5325" for this suite.
Sep 18 03:17:20.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:17:20.841: INFO: namespace daemonsets-5325 deletion completed in 6.152967472s

• [SLOW TEST:28.470 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
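The repeated "DaemonSet pods can't tolerate node iruya-control-plane" lines above reflect the `node-role.kubernetes.io/master:NoSchedule` taint, so daemon pods land only on the two worker nodes. A DaemonSet that should also run on the control-plane node would add a matching toleration — the manifest below is a sketch with assumed labels, derived from the taint shown in the log, not the test's own spec:

```shell
# Illustrative toleration matching the taint logged above
# ({Key:node-role.kubernetes.io/master Effect:NoSchedule}).
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set        # hypothetical label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists     # tolerate the taint regardless of value
      containers:
      - name: app
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]
EOF
```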
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:17:20.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Sep 18 03:17:20.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2323 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Sep 18 03:17:25.596: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0918 03:17:25.456892    2385 log.go:172] (0x2ac8070) (0x2ac80e0) Create stream\nI0918 03:17:25.459889    2385 log.go:172] (0x2ac8070) (0x2ac80e0) Stream added, broadcasting: 1\nI0918 03:17:25.474737    2385 log.go:172] (0x2ac8070) Reply frame received for 1\nI0918 03:17:25.475189    2385 log.go:172] (0x2ac8070) (0x28345b0) Create stream\nI0918 03:17:25.475260    2385 log.go:172] (0x2ac8070) (0x28345b0) Stream added, broadcasting: 3\nI0918 03:17:25.476519    2385 log.go:172] (0x2ac8070) Reply frame received for 3\nI0918 03:17:25.476740    2385 log.go:172] (0x2ac8070) (0x24aa700) Create stream\nI0918 03:17:25.476806    2385 log.go:172] (0x2ac8070) (0x24aa700) Stream added, broadcasting: 5\nI0918 03:17:25.477844    2385 log.go:172] (0x2ac8070) Reply frame received for 5\nI0918 03:17:25.478045    2385 log.go:172] (0x2ac8070) (0x2834620) Create stream\nI0918 03:17:25.478099    2385 log.go:172] (0x2ac8070) (0x2834620) Stream added, broadcasting: 7\nI0918 03:17:25.479100    2385 log.go:172] (0x2ac8070) Reply frame received for 7\nI0918 03:17:25.480780    2385 log.go:172] (0x28345b0) (3) Writing data frame\nI0918 03:17:25.481684    2385 log.go:172] (0x28345b0) (3) Writing data frame\nI0918 03:17:25.482634    2385 log.go:172] (0x2ac8070) Data frame received for 5\nI0918 03:17:25.482820    2385 log.go:172] (0x24aa700) (5) Data frame handling\nI0918 03:17:25.483062    2385 log.go:172] (0x24aa700) (5) Data frame sent\nI0918 03:17:25.483321    2385 log.go:172] (0x2ac8070) Data frame received for 5\nI0918 03:17:25.483384    2385 log.go:172] (0x24aa700) (5) Data frame handling\nI0918 03:17:25.483468    2385 log.go:172] (0x24aa700) (5) Data frame sent\nI0918 03:17:25.520199    2385 log.go:172] (0x2ac8070) Data frame received for 7\nI0918 03:17:25.520360    2385 log.go:172] (0x2834620) (7) Data frame handling\nI0918 03:17:25.520595    2385 log.go:172] (0x2ac8070) Data frame received for 5\nI0918 03:17:25.520832    2385 log.go:172] (0x24aa700) (5) Data frame handling\nI0918 03:17:25.521128    2385 log.go:172] (0x2ac8070) Data frame received for 1\nI0918 03:17:25.521340    2385 log.go:172] (0x2ac80e0) (1) Data frame handling\nI0918 03:17:25.521534    2385 log.go:172] (0x2ac80e0) (1) Data frame sent\nI0918 03:17:25.522470    2385 log.go:172] (0x2ac8070) (0x2ac80e0) Stream removed, broadcasting: 1\nI0918 03:17:25.523198    2385 log.go:172] (0x2ac8070) (0x28345b0) Stream removed, broadcasting: 3\nI0918 03:17:25.527045    2385 log.go:172] (0x2ac8070) Go away received\nI0918 03:17:25.527203    2385 log.go:172] (0x2ac8070) (0x2ac80e0) Stream removed, broadcasting: 1\nI0918 03:17:25.527797    2385 log.go:172] (0x2ac8070) (0x28345b0) Stream removed, broadcasting: 3\nI0918 03:17:25.527903    2385 log.go:172] (0x2ac8070) (0x24aa700) Stream removed, broadcasting: 5\nI0918 03:17:25.528612    2385 log.go:172] (0x2ac8070) (0x2834620) Stream removed, broadcasting: 7\n"
Sep 18 03:17:25.597: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:17:27.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2323" for this suite.
Sep 18 03:17:33.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:17:33.768: INFO: namespace kubectl-2323 deletion completed in 6.15257778s

• [SLOW TEST:12.925 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
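The `--rm job` spec drives the deprecated `--generator=job/v1` code path with stdin attached, then verifies the job is deleted on exit. The exact invocation from the log above, reflowed for readability (the test also writes `abcd1234` to the attached stdin, which is why it appears in stdout before "stdin closed"):

```shell
# The command the test ran (taken from the log above), reflowed.
# Requires a live cluster; deprecated flags match kubectl v1.15.
kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2323 \
  run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin \
  -- sh -c "cat && echo 'stdin closed'"
```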
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:17:33.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-ddwwz in namespace proxy-2102
I0918 03:17:33.870181       7 runners.go:180] Created replication controller with name: proxy-service-ddwwz, namespace: proxy-2102, replica count: 1
I0918 03:17:34.922194       7 runners.go:180] proxy-service-ddwwz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0918 03:17:35.922943       7 runners.go:180] proxy-service-ddwwz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0918 03:17:36.923812       7 runners.go:180] proxy-service-ddwwz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0918 03:17:37.924861       7 runners.go:180] proxy-service-ddwwz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0918 03:17:38.925741       7 runners.go:180] proxy-service-ddwwz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0918 03:17:39.926592       7 runners.go:180] proxy-service-ddwwz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0918 03:17:40.927289       7 runners.go:180] proxy-service-ddwwz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 18 03:17:40.939: INFO: setup took 7.119999799s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Sep 18 03:17:40.950: INFO: (0) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 9.876417ms)
Sep 18 03:17:40.954: INFO: (0) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 12.386404ms)
Sep 18 03:17:40.954: INFO: (0) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:1080/proxy/: test<... (200; 12.85012ms)
Sep 18 03:17:40.955: INFO: (0) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 13.287238ms)
Sep 18 03:17:40.955: INFO: (0) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 13.966786ms)
Sep 18 03:17:40.955: INFO: (0) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 14.30016ms)
Sep 18 03:17:40.955: INFO: (0) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 14.667907ms)
Sep 18 03:17:40.955: INFO: (0) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 14.987794ms)
Sep 18 03:17:40.955: INFO: (0) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 15.188229ms)
Sep 18 03:17:40.955: INFO: (0) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 14.020364ms)
Sep 18 03:17:40.956: INFO: (0) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 15.427654ms)
Sep 18 03:17:40.956: INFO: (0) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 15.862867ms)
Sep 18 03:17:40.960: INFO: (0) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 19.336424ms)
Sep 18 03:17:40.960: INFO: (0) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 18.87588ms)
Sep 18 03:17:40.960: INFO: (0) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test<... (200; 17.228103ms)
Sep 18 03:17:40.980: INFO: (1) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test (200; 18.248653ms)
Sep 18 03:17:40.981: INFO: (1) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 18.115242ms)
Sep 18 03:17:40.981: INFO: (1) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 18.461643ms)
Sep 18 03:17:40.981: INFO: (1) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 18.597269ms)
Sep 18 03:17:40.981: INFO: (1) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 18.579122ms)
Sep 18 03:17:40.983: INFO: (1) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 20.372764ms)
Sep 18 03:17:40.984: INFO: (1) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 20.830643ms)
Sep 18 03:17:40.984: INFO: (1) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 21.311874ms)
Sep 18 03:17:40.984: INFO: (1) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 21.087407ms)
Sep 18 03:17:40.984: INFO: (1) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 21.182817ms)
Sep 18 03:17:40.995: INFO: (2) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 10.172212ms)
Sep 18 03:17:40.995: INFO: (2) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 9.29059ms)
Sep 18 03:17:40.996: INFO: (2) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test<... (200; 9.942854ms)
Sep 18 03:17:40.996: INFO: (2) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 10.149962ms)
Sep 18 03:17:40.996: INFO: (2) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 11.338027ms)
Sep 18 03:17:40.997: INFO: (2) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 11.010674ms)
Sep 18 03:17:40.998: INFO: (2) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 13.427606ms)
Sep 18 03:17:41.000: INFO: (2) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 15.242607ms)
Sep 18 03:17:41.001: INFO: (2) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 15.47694ms)
Sep 18 03:17:41.001: INFO: (2) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 15.860416ms)
Sep 18 03:17:41.001: INFO: (2) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 14.796072ms)
Sep 18 03:17:41.001: INFO: (2) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 16.098787ms)
Sep 18 03:17:41.001: INFO: (2) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 16.765854ms)
Sep 18 03:17:41.001: INFO: (2) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 15.184906ms)
Sep 18 03:17:41.007: INFO: (3) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 5.458925ms)
Sep 18 03:17:41.007: INFO: (3) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 5.489078ms)
Sep 18 03:17:41.007: INFO: (3) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 5.901229ms)
Sep 18 03:17:41.007: INFO: (3) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 5.959276ms)
Sep 18 03:17:41.008: INFO: (3) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 6.146555ms)
Sep 18 03:17:41.008: INFO: (3) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 6.154522ms)
Sep 18 03:17:41.008: INFO: (3) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 6.307673ms)
Sep 18 03:17:41.008: INFO: (3) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 6.831475ms)
Sep 18 03:17:41.008: INFO: (3) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 6.905876ms)
Sep 18 03:17:41.008: INFO: (3) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test<... (200; 7.335084ms)
Sep 18 03:17:41.009: INFO: (3) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 7.709138ms)
Sep 18 03:17:41.009: INFO: (3) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 7.274256ms)
Sep 18 03:17:41.013: INFO: (4) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 3.505817ms)
Sep 18 03:17:41.013: INFO: (4) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test<... (200; 4.708745ms)
Sep 18 03:17:41.014: INFO: (4) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 4.905912ms)
Sep 18 03:17:41.014: INFO: (4) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 4.849805ms)
Sep 18 03:17:41.015: INFO: (4) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 5.29822ms)
Sep 18 03:17:41.015: INFO: (4) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 5.862987ms)
Sep 18 03:17:41.016: INFO: (4) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 6.376497ms)
Sep 18 03:17:41.016: INFO: (4) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 7.074372ms)
Sep 18 03:17:41.017: INFO: (4) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 7.162365ms)
Sep 18 03:17:41.017: INFO: (4) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 7.277629ms)
Sep 18 03:17:41.017: INFO: (4) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 7.409324ms)
Sep 18 03:17:41.017: INFO: (4) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 7.744356ms)
Sep 18 03:17:41.017: INFO: (4) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 7.991927ms)
Sep 18 03:17:41.018: INFO: (4) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 8.300965ms)
Sep 18 03:17:41.018: INFO: (4) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 8.791009ms)
Sep 18 03:17:41.022: INFO: (5) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 3.184975ms)
Sep 18 03:17:41.023: INFO: (5) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 4.337904ms)
Sep 18 03:17:41.023: INFO: (5) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 4.371656ms)
Sep 18 03:17:41.024: INFO: (5) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 5.914429ms)
Sep 18 03:17:41.025: INFO: (5) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 6.245986ms)
Sep 18 03:17:41.025: INFO: (5) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 6.290208ms)
Sep 18 03:17:41.025: INFO: (5) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test<... (200; 7.601109ms)
Sep 18 03:17:41.026: INFO: (5) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 7.78813ms)
Sep 18 03:17:41.027: INFO: (5) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 7.99914ms)
Sep 18 03:17:41.027: INFO: (5) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 8.445072ms)
Sep 18 03:17:41.031: INFO: (6) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 3.94905ms)
Sep 18 03:17:41.032: INFO: (6) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:1080/proxy/: test<... (200; 4.482464ms)
Sep 18 03:17:41.032: INFO: (6) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 5.152815ms)
Sep 18 03:17:41.033: INFO: (6) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 5.018176ms)
Sep 18 03:17:41.034: INFO: (6) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 6.512657ms)
Sep 18 03:17:41.034: INFO: (6) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test (200; 6.868226ms)
Sep 18 03:17:41.034: INFO: (6) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 7.014979ms)
Sep 18 03:17:41.035: INFO: (6) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 7.189095ms)
Sep 18 03:17:41.035: INFO: (6) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 7.23394ms)
Sep 18 03:17:41.035: INFO: (6) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 7.353865ms)
Sep 18 03:17:41.035: INFO: (6) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 7.767416ms)
Sep 18 03:17:41.035: INFO: (6) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 7.868536ms)
Sep 18 03:17:41.035: INFO: (6) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 7.890451ms)
Sep 18 03:17:41.036: INFO: (6) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 8.332909ms)
Sep 18 03:17:41.036: INFO: (6) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 8.844947ms)
Sep 18 03:17:41.040: INFO: (7) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 3.216741ms)
Sep 18 03:17:41.041: INFO: (7) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 4.057899ms)
Sep 18 03:17:41.041: INFO: (7) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 4.390664ms)
Sep 18 03:17:41.043: INFO: (7) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:1080/proxy/: test<... (200; 6.21039ms)
Sep 18 03:17:41.043: INFO: (7) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 6.581363ms)
Sep 18 03:17:41.044: INFO: (7) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 7.176949ms)
Sep 18 03:17:41.044: INFO: (7) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 7.264731ms)
Sep 18 03:17:41.044: INFO: (7) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 7.359449ms)
Sep 18 03:17:41.044: INFO: (7) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 7.178183ms)
Sep 18 03:17:41.044: INFO: (7) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 7.37231ms)
Sep 18 03:17:41.044: INFO: (7) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 7.38206ms)
Sep 18 03:17:41.044: INFO: (7) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 7.33724ms)
Sep 18 03:17:41.044: INFO: (7) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 7.440492ms)
Sep 18 03:17:41.045: INFO: (7) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 8.145419ms)
Sep 18 03:17:41.046: INFO: (7) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test (200; 5.473475ms)
Sep 18 03:17:41.054: INFO: (8) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 5.706172ms)
Sep 18 03:17:41.054: INFO: (8) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 5.93642ms)
Sep 18 03:17:41.054: INFO: (8) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 5.961539ms)
Sep 18 03:17:41.055: INFO: (8) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:1080/proxy/: test<... (200; 6.010876ms)
Sep 18 03:17:41.055: INFO: (8) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 6.406029ms)
Sep 18 03:17:41.055: INFO: (8) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 6.669003ms)
Sep 18 03:17:41.055: INFO: (8) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 6.727756ms)
Sep 18 03:17:41.055: INFO: (8) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test (200; 5.988911ms)
Sep 18 03:17:41.062: INFO: (9) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 6.066865ms)
Sep 18 03:17:41.063: INFO: (9) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 6.893661ms)
Sep 18 03:17:41.064: INFO: (9) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 7.86472ms)
Sep 18 03:17:41.064: INFO: (9) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 7.884492ms)
Sep 18 03:17:41.064: INFO: (9) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 7.887752ms)
Sep 18 03:17:41.064: INFO: (9) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:1080/proxy/: test<... (200; 8.388871ms)
Sep 18 03:17:41.065: INFO: (9) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 8.401543ms)
Sep 18 03:17:41.067: INFO: (9) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 10.451673ms)
Sep 18 03:17:41.067: INFO: (9) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: ... (200; 6.142477ms)
Sep 18 03:17:41.075: INFO: (10) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 6.457708ms)
Sep 18 03:17:41.075: INFO: (10) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 6.750198ms)
Sep 18 03:17:41.075: INFO: (10) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test<... (200; 7.103027ms)
Sep 18 03:17:41.076: INFO: (10) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 7.144291ms)
Sep 18 03:17:41.076: INFO: (10) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 7.23698ms)
Sep 18 03:17:41.076: INFO: (10) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 7.485037ms)
Sep 18 03:17:41.076: INFO: (10) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 7.519327ms)
Sep 18 03:17:41.076: INFO: (10) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 7.985334ms)
Sep 18 03:17:41.076: INFO: (10) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 7.720264ms)
Sep 18 03:17:41.076: INFO: (10) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 8.143001ms)
Sep 18 03:17:41.080: INFO: (11) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:1080/proxy/: test<... (200; 3.917076ms)
Sep 18 03:17:41.081: INFO: (11) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 4.603032ms)
Sep 18 03:17:41.081: INFO: (11) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 4.994959ms)
Sep 18 03:17:41.082: INFO: (11) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 5.418264ms)
Sep 18 03:17:41.083: INFO: (11) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test (200; 7.576471ms)
Sep 18 03:17:41.084: INFO: (11) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 7.714793ms)
Sep 18 03:17:41.084: INFO: (11) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 7.73398ms)
Sep 18 03:17:41.085: INFO: (11) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 8.02374ms)
Sep 18 03:17:41.085: INFO: (11) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 7.841112ms)
Sep 18 03:17:41.085: INFO: (11) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 7.9092ms)
Sep 18 03:17:41.089: INFO: (12) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 3.934736ms)
Sep 18 03:17:41.089: INFO: (12) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 4.376081ms)
Sep 18 03:17:41.090: INFO: (12) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 5.410127ms)
Sep 18 03:17:41.091: INFO: (12) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 5.589829ms)
Sep 18 03:17:41.091: INFO: (12) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 6.37169ms)
Sep 18 03:17:41.091: INFO: (12) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:1080/proxy/: test<... (200; 6.35479ms)
Sep 18 03:17:41.091: INFO: (12) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 6.531885ms)
Sep 18 03:17:41.092: INFO: (12) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: ... (200; 6.734207ms)
Sep 18 03:17:41.092: INFO: (12) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 6.984962ms)
Sep 18 03:17:41.092: INFO: (12) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 7.093462ms)
Sep 18 03:17:41.093: INFO: (12) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 7.599427ms)
Sep 18 03:17:41.093: INFO: (12) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 7.63479ms)
Sep 18 03:17:41.093: INFO: (12) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 7.816992ms)
Sep 18 03:17:41.093: INFO: (12) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 7.928691ms)
Sep 18 03:17:41.098: INFO: (13) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 4.542932ms)
Sep 18 03:17:41.098: INFO: (13) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 5.195416ms)
Sep 18 03:17:41.099: INFO: (13) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 4.971874ms)
Sep 18 03:17:41.099: INFO: (13) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 5.814928ms)
Sep 18 03:17:41.100: INFO: (13) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:1080/proxy/: test<... (200; 6.334427ms)
Sep 18 03:17:41.100: INFO: (13) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 6.810876ms)
Sep 18 03:17:41.100: INFO: (13) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 7.024229ms)
Sep 18 03:17:41.101: INFO: (13) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 7.393944ms)
Sep 18 03:17:41.101: INFO: (13) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 7.545402ms)
Sep 18 03:17:41.101: INFO: (13) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 7.82864ms)
Sep 18 03:17:41.102: INFO: (13) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: ... (200; 9.608267ms)
Sep 18 03:17:41.108: INFO: (14) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 4.598216ms)
Sep 18 03:17:41.108: INFO: (14) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 4.633597ms)
Sep 18 03:17:41.108: INFO: (14) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 4.776331ms)
Sep 18 03:17:41.109: INFO: (14) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 5.11177ms)
Sep 18 03:17:41.109: INFO: (14) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 5.941048ms)
Sep 18 03:17:41.109: INFO: (14) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 6.005665ms)
Sep 18 03:17:41.110: INFO: (14) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 6.68409ms)
Sep 18 03:17:41.111: INFO: (14) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 7.292137ms)
Sep 18 03:17:41.112: INFO: (14) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:1080/proxy/: test<... (200; 8.089577ms)
Sep 18 03:17:41.112: INFO: (14) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 8.16562ms)
Sep 18 03:17:41.112: INFO: (14) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 8.129904ms)
Sep 18 03:17:41.113: INFO: (14) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 9.547185ms)
Sep 18 03:17:41.114: INFO: (14) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 10.075244ms)
Sep 18 03:17:41.114: INFO: (14) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 10.676071ms)
Sep 18 03:17:41.115: INFO: (14) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 11.386866ms)
Sep 18 03:17:41.115: INFO: (14) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test (200; 3.295241ms)
Sep 18 03:17:41.119: INFO: (15) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 4.030522ms)
Sep 18 03:17:41.120: INFO: (15) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 4.054495ms)
Sep 18 03:17:41.121: INFO: (15) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 4.910979ms)
Sep 18 03:17:41.121: INFO: (15) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:1080/proxy/: test<... (200; 5.081362ms)
Sep 18 03:17:41.122: INFO: (15) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 6.746722ms)
Sep 18 03:17:41.122: INFO: (15) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 6.768545ms)
Sep 18 03:17:41.122: INFO: (15) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: ... (200; 7.760542ms)
Sep 18 03:17:41.129: INFO: (16) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 4.573201ms)
Sep 18 03:17:41.129: INFO: (16) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 5.02957ms)
Sep 18 03:17:41.130: INFO: (16) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 6.151267ms)
Sep 18 03:17:41.130: INFO: (16) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:1080/proxy/: test<... (200; 6.200721ms)
Sep 18 03:17:41.131: INFO: (16) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 6.701462ms)
Sep 18 03:17:41.131: INFO: (16) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 6.913477ms)
Sep 18 03:17:41.132: INFO: (16) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 7.431558ms)
Sep 18 03:17:41.132: INFO: (16) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 7.877939ms)
Sep 18 03:17:41.132: INFO: (16) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 8.103953ms)
Sep 18 03:17:41.132: INFO: (16) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 8.102625ms)
Sep 18 03:17:41.133: INFO: (16) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 8.495936ms)
Sep 18 03:17:41.133: INFO: (16) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 8.365317ms)
Sep 18 03:17:41.133: INFO: (16) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test<... (200; 10.030234ms)
Sep 18 03:17:41.144: INFO: (17) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test (200; 11.777548ms)
Sep 18 03:17:41.147: INFO: (17) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 11.905013ms)
Sep 18 03:17:41.147: INFO: (17) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 12.461771ms)
Sep 18 03:17:41.147: INFO: (17) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 12.402842ms)
Sep 18 03:17:41.147: INFO: (17) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 12.318448ms)
Sep 18 03:17:41.147: INFO: (17) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 12.377708ms)
Sep 18 03:17:41.147: INFO: (17) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 12.945064ms)
Sep 18 03:17:41.148: INFO: (17) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 12.943748ms)
Sep 18 03:17:41.159: INFO: (18) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 10.646134ms)
Sep 18 03:17:41.159: INFO: (18) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 10.782128ms)
Sep 18 03:17:41.159: INFO: (18) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: ... (200; 11.227773ms)
Sep 18 03:17:41.159: INFO: (18) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:1080/proxy/: test<... (200; 11.324029ms)
Sep 18 03:17:41.159: INFO: (18) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 11.648893ms)
Sep 18 03:17:41.160: INFO: (18) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 11.450633ms)
Sep 18 03:17:41.160: INFO: (18) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname1/proxy/: foo (200; 11.875459ms)
Sep 18 03:17:41.160: INFO: (18) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 11.939778ms)
Sep 18 03:17:41.160: INFO: (18) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 11.890091ms)
Sep 18 03:17:41.160: INFO: (18) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 12.076547ms)
Sep 18 03:17:41.213: INFO: (18) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 65.624084ms)
Sep 18 03:17:41.213: INFO: (18) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 65.395615ms)
Sep 18 03:17:41.222: INFO: (19) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 7.102977ms)
Sep 18 03:17:41.222: INFO: (19) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:462/proxy/: tls qux (200; 7.562339ms)
Sep 18 03:17:41.223: INFO: (19) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq/proxy/: test (200; 8.786289ms)
Sep 18 03:17:41.223: INFO: (19) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:443/proxy/: test<... (200; 9.265245ms)
Sep 18 03:17:41.223: INFO: (19) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname1/proxy/: tls baz (200; 9.501569ms)
Sep 18 03:17:41.224: INFO: (19) /api/v1/namespaces/proxy-2102/pods/https:proxy-service-ddwwz-xfzbq:460/proxy/: tls baz (200; 9.527244ms)
Sep 18 03:17:41.224: INFO: (19) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 9.665815ms)
Sep 18 03:17:41.224: INFO: (19) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname2/proxy/: bar (200; 9.632382ms)
Sep 18 03:17:41.224: INFO: (19) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:160/proxy/: foo (200; 9.498895ms)
Sep 18 03:17:41.224: INFO: (19) /api/v1/namespaces/proxy-2102/pods/http:proxy-service-ddwwz-xfzbq:1080/proxy/: ... (200; 9.957661ms)
Sep 18 03:17:41.224: INFO: (19) /api/v1/namespaces/proxy-2102/pods/proxy-service-ddwwz-xfzbq:162/proxy/: bar (200; 9.821928ms)
Sep 18 03:17:41.225: INFO: (19) /api/v1/namespaces/proxy-2102/services/proxy-service-ddwwz:portname2/proxy/: bar (200; 10.567919ms)
Sep 18 03:17:41.225: INFO: (19) /api/v1/namespaces/proxy-2102/services/https:proxy-service-ddwwz:tlsportname2/proxy/: tls qux (200; 10.757723ms)
Sep 18 03:17:41.225: INFO: (19) /api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/: foo (200; 10.762065ms)
STEP: deleting ReplicationController proxy-service-ddwwz in namespace proxy-2102, will wait for the garbage collector to delete the pods
Sep 18 03:17:41.288: INFO: Deleting ReplicationController proxy-service-ddwwz took: 8.216249ms
Sep 18 03:17:41.589: INFO: Terminating ReplicationController proxy-service-ddwwz pods took: 300.954436ms
[AfterEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:17:54.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2102" for this suite.
Sep 18 03:18:00.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:18:00.750: INFO: namespace proxy-2102 deletion completed in 6.146905182s

• [SLOW TEST:26.980 seconds]
[sig-network] Proxy
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
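The proxy conformance test above exercises the apiserver proxy subresource against both named service ports and raw pod ports, producing URLs of the form `/api/v1/namespaces/<ns>/services/<scheme>:<name>:<portname>/proxy/`. As a hedged sketch only (the selector label and service port numbers below are assumptions inferred from the log lines, not copied from the e2e test source), a Service that would yield proxy URLs like the ones logged looks roughly like:

```yaml
# Sketch only: the selector label and service-side port numbers are
# assumptions; targetPorts 160/162 (HTTP) and 460/462 (TLS) are
# inferred from the pod proxy URLs in the log above.
apiVersion: v1
kind: Service
metadata:
  name: proxy-service-ddwwz
  namespace: proxy-2102
spec:
  selector:
    app: proxy-service            # hypothetical label
  ports:
  - {name: portname1,    port: 80,  targetPort: 160}
  - {name: portname2,    port: 81,  targetPort: 162}
  - {name: tlsportname1, port: 443, targetPort: 460}
  - {name: tlsportname2, port: 444, targetPort: 462}
```

Such endpoints can be reached through the apiserver with, e.g., `kubectl get --raw '/api/v1/namespaces/proxy-2102/services/http:proxy-service-ddwwz:portname1/proxy/'`, which is the same path the test requests repeatedly while measuring latency.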
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:18:00.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 03:18:00.832: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb9170f1-f8e8-468c-9f62-560bc493b9ee" in namespace "projected-800" to be "success or failure"
Sep 18 03:18:00.918: INFO: Pod "downwardapi-volume-fb9170f1-f8e8-468c-9f62-560bc493b9ee": Phase="Pending", Reason="", readiness=false. Elapsed: 85.329376ms
Sep 18 03:18:02.924: INFO: Pod "downwardapi-volume-fb9170f1-f8e8-468c-9f62-560bc493b9ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09134775s
Sep 18 03:18:04.932: INFO: Pod "downwardapi-volume-fb9170f1-f8e8-468c-9f62-560bc493b9ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099003535s
STEP: Saw pod success
Sep 18 03:18:04.932: INFO: Pod "downwardapi-volume-fb9170f1-f8e8-468c-9f62-560bc493b9ee" satisfied condition "success or failure"
Sep 18 03:18:04.945: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-fb9170f1-f8e8-468c-9f62-560bc493b9ee container client-container: 
STEP: delete the pod
Sep 18 03:18:04.987: INFO: Waiting for pod downwardapi-volume-fb9170f1-f8e8-468c-9f62-560bc493b9ee to disappear
Sep 18 03:18:04.992: INFO: Pod downwardapi-volume-fb9170f1-f8e8-468c-9f62-560bc493b9ee no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:18:04.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-800" for this suite.
Sep 18 03:18:11.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:18:11.190: INFO: namespace projected-800 deletion completed in 6.191834413s

• [SLOW TEST:10.438 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
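The test above checks that when a container declares no memory limit, a downward API `resourceFieldRef` for `limits.memory` falls back to the node's allocatable memory. A minimal sketch of the kind of pod it creates (the pod name, volume name, image, and file path here are illustrative assumptions, not the test's actual values):

```yaml
# Sketch only: names, image, and paths are assumptions; the e2e test
# uses its own client-container test image.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # No resources.limits.memory set, so the projected value defaults
    # to the node's allocatable memory.
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```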
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:18:11.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 18 03:18:11.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3888'
Sep 18 03:18:12.411: INFO: stderr: ""
Sep 18 03:18:12.411: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Sep 18 03:18:12.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3888'
Sep 18 03:18:24.515: INFO: stderr: ""
Sep 18 03:18:24.515: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:18:24.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3888" for this suite.
Sep 18 03:18:30.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:18:30.682: INFO: namespace kubectl-3888 deletion completed in 6.155285735s

• [SLOW TEST:19.490 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
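The `kubectl run` invocation logged above uses `--restart=Never` with the `run-pod/v1` generator, which creates a bare Pod rather than a Deployment or Job. A roughly equivalent manifest (the `run: <name>` label follows kubectl's convention for generated pods and is an assumption here):

```yaml
# Roughly what the run-pod/v1 generator produces for the kubectl
# command in the log; the label is an assumed kubectl convention.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```

Note that `--generator=run-pod/v1` was still accepted in the v1.15 kubectl used by this suite; later releases deprecated and then removed the generator flags, making the bare-Pod behavior the default for `kubectl run`.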
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:18:30.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-11c3873e-cc1c-4374-8f61-0379d71a060d
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:18:36.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1455" for this suite.
Sep 18 03:18:58.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:18:59.030: INFO: namespace configmap-1455 deletion completed in 22.154283398s

• [SLOW TEST:28.346 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
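The ConfigMap test above verifies that both text (`data`) and binary (`binaryData`) keys are reflected in a mounted volume. A minimal sketch of such a ConfigMap (key names and the base64 payload are illustrative assumptions, not the values used by the test):

```yaml
# Sketch only: keys and payload are assumptions. binaryData values
# must be base64-encoded and may contain non-UTF-8 bytes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-example
data:
  data-1: value-1            # plain-text key, mounted verbatim
binaryData:
  dump.bin: /1RQ3w==         # arbitrary binary bytes, base64-encoded
```

When mounted as a volume, `data-1` and `dump.bin` each appear as files; the test waits for both the text and binary files to show the expected contents, as the "Waiting for pod with text data" / "Waiting for pod with binary data" steps indicate.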
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:18:59.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 03:18:59.127: INFO: Waiting up to 5m0s for pod "downwardapi-volume-851f0e96-e815-4bdd-93dc-216a98da8b20" in namespace "projected-2909" to be "success or failure"
Sep 18 03:18:59.134: INFO: Pod "downwardapi-volume-851f0e96-e815-4bdd-93dc-216a98da8b20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.953011ms
Sep 18 03:19:01.143: INFO: Pod "downwardapi-volume-851f0e96-e815-4bdd-93dc-216a98da8b20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015277638s
Sep 18 03:19:03.252: INFO: Pod "downwardapi-volume-851f0e96-e815-4bdd-93dc-216a98da8b20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124276024s
Sep 18 03:19:05.260: INFO: Pod "downwardapi-volume-851f0e96-e815-4bdd-93dc-216a98da8b20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.132619853s
STEP: Saw pod success
Sep 18 03:19:05.260: INFO: Pod "downwardapi-volume-851f0e96-e815-4bdd-93dc-216a98da8b20" satisfied condition "success or failure"
Sep 18 03:19:05.266: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-851f0e96-e815-4bdd-93dc-216a98da8b20 container client-container: 
STEP: delete the pod
Sep 18 03:19:05.296: INFO: Waiting for pod downwardapi-volume-851f0e96-e815-4bdd-93dc-216a98da8b20 to disappear
Sep 18 03:19:05.302: INFO: Pod downwardapi-volume-851f0e96-e815-4bdd-93dc-216a98da8b20 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:19:05.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2909" for this suite.
Sep 18 03:19:11.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:19:11.491: INFO: namespace projected-2909 deletion completed in 6.178981039s

• [SLOW TEST:12.460 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
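(Aside: the repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Phase="Pending" ... Elapsed:` lines above come from a poll-until-terminal-phase loop. A sketch of that pattern, assuming a caller-supplied `get_phase` callable in place of a real API client — the actual framework is Go, not this Python:)

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0):
    """Poll a pod's phase until it reaches a terminal state (Succeeded or
    Failed), mirroring the log's 5m0s wait with ~2s between checks."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase within timeout")

# Simulated phase sequence matching the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_success_or_failure(lambda: next(phases), interval=0.0))  # Succeeded
```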
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:19:11.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:19:11.577: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Sep 18 03:19:11.601: INFO: Pod name sample-pod: Found 0 pods out of 1
Sep 18 03:19:16.609: INFO: Pod name sample-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Sep 18 03:19:16.611: INFO: Creating deployment "test-rolling-update-deployment"
Sep 18 03:19:16.618: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Sep 18 03:19:16.652: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Sep 18 03:19:18.667: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Sep 18 03:19:18.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735995956, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735995956, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735995956, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735995956, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 18 03:19:20.680: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep 18 03:19:20.706: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-1421,SelfLink:/apis/apps/v1/namespaces/deployment-1421/deployments/test-rolling-update-deployment,UID:03f19205-448d-4377-a7d5-d80445dc2c0d,ResourceVersion:796374,Generation:1,CreationTimestamp:2020-09-18 03:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-09-18 03:19:16 +0000 UTC 2020-09-18 03:19:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-09-18 03:19:19 +0000 UTC 2020-09-18 03:19:16 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Sep 18 03:19:20.717: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-1421,SelfLink:/apis/apps/v1/namespaces/deployment-1421/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:bcb81227-fca6-425f-a7b1-cf1ae7066193,ResourceVersion:796362,Generation:1,CreationTimestamp:2020-09-18 03:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 03f19205-448d-4377-a7d5-d80445dc2c0d 0x9701a77 0x9701a78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Sep 18 03:19:20.717: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Sep 18 03:19:20.719: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-1421,SelfLink:/apis/apps/v1/namespaces/deployment-1421/replicasets/test-rolling-update-controller,UID:6bba527c-f68c-46ff-a819-a86880339c2e,ResourceVersion:796372,Generation:2,CreationTimestamp:2020-09-18 03:19:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 03f19205-448d-4377-a7d5-d80445dc2c0d 0x97019a7 0x97019a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep 18 03:19:20.728: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-rcl7f" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-rcl7f,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-1421,SelfLink:/api/v1/namespaces/deployment-1421/pods/test-rolling-update-deployment-79f6b9d75c-rcl7f,UID:8937e3c6-b720-4d43-b55f-87322cfb3d12,ResourceVersion:796361,Generation:0,CreationTimestamp:2020-09-18 03:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c bcb81227-fca6-425f-a7b1-cf1ae7066193 0x80b2587 0x80b2588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kx887 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kx887,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-kx887 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x80b2600} {node.kubernetes.io/unreachable Exists  NoExecute 0x80b2620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:19:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:19:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:19:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:19:16 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.39,StartTime:2020-09-18 03:19:16 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-09-18 03:19:19 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://c2e8d52eb77e859772f72a74d2ce3971c7a7c93a112608c30a5efd3f9d9f3bb0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:19:20.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1421" for this suite.
Sep 18 03:19:26.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:19:26.889: INFO: namespace deployment-1421 deletion completed in 6.151170504s

• [SLOW TEST:15.397 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
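(Aside: the log above shows the adopted ReplicaSet at `deployment.kubernetes.io/revision: 3546343826724305832` and the new ReplicaSet at `...833` — "the next revision". A sketch of that bookkeeping rule, as plain Python over dicts rather than the controller's actual Go code; the dict shape here is an assumption for illustration:)

```python
REVISION_ANN = "deployment.kubernetes.io/revision"

def next_revision(replica_sets):
    """Next deployment revision: one more than the largest revision
    annotation carried by any existing ReplicaSet (or 1 if none exist)."""
    revs = [int(rs["annotations"].get(REVISION_ANN, "0")) for rs in replica_sets]
    return max(revs, default=0) + 1

# The adopted ReplicaSet from the log carried revision ...832,
# so the Deployment's new ReplicaSet is stamped with ...833.
adopted = {"annotations": {REVISION_ANN: "3546343826724305832"}}
print(next_revision([adopted]))  # 3546343826724305833
```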
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:19:26.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Sep 18 03:19:26.968: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 18 03:19:26.999: INFO: Waiting for terminating namespaces to be deleted...
Sep 18 03:19:27.004: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Sep 18 03:19:27.015: INFO: kube-proxy-xbqp2 from kube-system started at 2020-09-13 16:51:07 +0000 UTC (1 container status recorded)
Sep 18 03:19:27.016: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 18 03:19:27.016: INFO: kindnet-85m7h from kube-system started at 2020-09-13 16:51:07 +0000 UTC (1 container status recorded)
Sep 18 03:19:27.016: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 18 03:19:27.016: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Sep 18 03:19:27.027: INFO: kube-proxy-v7g67 from kube-system started at 2020-09-13 16:51:07 +0000 UTC (1 container status recorded)
Sep 18 03:19:27.027: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 18 03:19:27.028: INFO: kindnet-jxh2j from kube-system started at 2020-09-13 16:51:07 +0000 UTC (1 container status recorded)
Sep 18 03:19:27.028: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-22789c49-181d-4e76-b702-fe2dfac1e954 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-22789c49-181d-4e76-b702-fe2dfac1e954 off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-22789c49-181d-4e76-b702-fe2dfac1e954
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:19:35.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8602" for this suite.
Sep 18 03:19:47.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:19:47.400: INFO: namespace sched-pred-8602 deletion completed in 12.179319882s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:20.504 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
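(Aside: the NodeSelector test above labels a node with a random key/value ("42"), then relaunches the pod with a matching `nodeSelector`. The predicate being validated is simple subset matching — every key/value in the pod's selector must be present on the node. A hedged Python sketch of that check, not the scheduler's actual Go implementation:)

```python
def node_selector_matches(node_labels, node_selector):
    """PodMatchNodeSelector-style predicate: the pod fits only if every
    key/value pair in its nodeSelector appears among the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Label taken from the log; hostname label value is illustrative.
label = "kubernetes.io/e2e-22789c49-181d-4e76-b702-fe2dfac1e954"
node = {"kubernetes.io/hostname": "iruya-worker", label: "42"}

print(node_selector_matches(node, {label: "42"}))  # True
print(node_selector_matches(node, {label: "99"}))  # False
```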
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:19:47.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-1d18a764-c94b-4554-8609-8fede5ff80ed
STEP: Creating a pod to test consume secrets
Sep 18 03:19:47.529: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f1c5c25c-00d1-4de5-9ec7-4fb0c971677c" in namespace "projected-1956" to be "success or failure"
Sep 18 03:19:47.557: INFO: Pod "pod-projected-secrets-f1c5c25c-00d1-4de5-9ec7-4fb0c971677c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.065191ms
Sep 18 03:19:49.565: INFO: Pod "pod-projected-secrets-f1c5c25c-00d1-4de5-9ec7-4fb0c971677c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036087991s
Sep 18 03:19:51.574: INFO: Pod "pod-projected-secrets-f1c5c25c-00d1-4de5-9ec7-4fb0c971677c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044670905s
STEP: Saw pod success
Sep 18 03:19:51.574: INFO: Pod "pod-projected-secrets-f1c5c25c-00d1-4de5-9ec7-4fb0c971677c" satisfied condition "success or failure"
Sep 18 03:19:51.580: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-f1c5c25c-00d1-4de5-9ec7-4fb0c971677c container secret-volume-test: 
STEP: delete the pod
Sep 18 03:19:51.614: INFO: Waiting for pod pod-projected-secrets-f1c5c25c-00d1-4de5-9ec7-4fb0c971677c to disappear
Sep 18 03:19:51.623: INFO: Pod pod-projected-secrets-f1c5c25c-00d1-4de5-9ec7-4fb0c971677c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:19:51.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1956" for this suite.
Sep 18 03:19:57.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:19:57.829: INFO: namespace projected-1956 deletion completed in 6.19719663s

• [SLOW TEST:10.428 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
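(Aside: the "consumable in multiple volumes" test mounts the same Secret into two volumes of one pod and checks both files. A minimal sketch of why that must agree, assuming only that Secret `data` values are base64-encoded in the API object and that each volume decodes independently; key and value names here are made up:)

```python
import base64

# Hypothetical Secret with one key, base64-encoded as stored in the API.
secret_data = {"data-1": base64.b64encode(b"value-1").decode()}

def project_secret(secret_data):
    """Decode each key into file contents, as the kubelet does per volume."""
    return {name: base64.b64decode(b64) for name, b64 in secret_data.items()}

# Two volumes projecting the same Secret yield identical file contents.
vol1 = project_secret(secret_data)
vol2 = project_secret(secret_data)
print(vol1["data-1"], vol1 == vol2)  # b'value-1' True
```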
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:19:57.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 03:19:57.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64bcc7b1-1b37-4de8-b9d1-e71214dd960e" in namespace "downward-api-7559" to be "success or failure"
Sep 18 03:19:57.951: INFO: Pod "downwardapi-volume-64bcc7b1-1b37-4de8-b9d1-e71214dd960e": Phase="Pending", Reason="", readiness=false. Elapsed: 43.388414ms
Sep 18 03:20:00.036: INFO: Pod "downwardapi-volume-64bcc7b1-1b37-4de8-b9d1-e71214dd960e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128366977s
Sep 18 03:20:02.044: INFO: Pod "downwardapi-volume-64bcc7b1-1b37-4de8-b9d1-e71214dd960e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136491191s
STEP: Saw pod success
Sep 18 03:20:02.044: INFO: Pod "downwardapi-volume-64bcc7b1-1b37-4de8-b9d1-e71214dd960e" satisfied condition "success or failure"
Sep 18 03:20:02.050: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-64bcc7b1-1b37-4de8-b9d1-e71214dd960e container client-container: 
STEP: delete the pod
Sep 18 03:20:02.121: INFO: Waiting for pod downwardapi-volume-64bcc7b1-1b37-4de8-b9d1-e71214dd960e to disappear
Sep 18 03:20:02.132: INFO: Pod downwardapi-volume-64bcc7b1-1b37-4de8-b9d1-e71214dd960e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:20:02.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7559" for this suite.
Sep 18 03:20:08.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:20:08.303: INFO: namespace downward-api-7559 deletion completed in 6.162462254s

• [SLOW TEST:10.472 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
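(Aside: "should set mode on item file" exercises the per-item `mode` field of a downward API volume, which overrides the volume-wide `defaultMode` for that one file. A sketch of the precedence rule only — the octal values below are illustrative, not taken from this run:)

```python
def effective_mode(default_mode, item_mode=None):
    """File permission for a projected item: the item's own `mode` wins
    when set; otherwise the volume's `defaultMode` applies."""
    return item_mode if item_mode is not None else default_mode

print(oct(effective_mode(0o644)))         # 0o644  (no item override)
print(oct(effective_mode(0o644, 0o400)))  # 0o400  (item mode wins)
```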
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:20:08.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-5c3199a7-7dcd-4f30-a15c-58613c2842e8
STEP: Creating a pod to test consume configMaps
Sep 18 03:20:08.416: INFO: Waiting up to 5m0s for pod "pod-configmaps-f7520393-97c8-4fd4-bdc6-bed15754dda2" in namespace "configmap-4610" to be "success or failure"
Sep 18 03:20:08.440: INFO: Pod "pod-configmaps-f7520393-97c8-4fd4-bdc6-bed15754dda2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.48833ms
Sep 18 03:20:10.447: INFO: Pod "pod-configmaps-f7520393-97c8-4fd4-bdc6-bed15754dda2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031388342s
Sep 18 03:20:12.455: INFO: Pod "pod-configmaps-f7520393-97c8-4fd4-bdc6-bed15754dda2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039063642s
STEP: Saw pod success
Sep 18 03:20:12.455: INFO: Pod "pod-configmaps-f7520393-97c8-4fd4-bdc6-bed15754dda2" satisfied condition "success or failure"
Sep 18 03:20:12.461: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-f7520393-97c8-4fd4-bdc6-bed15754dda2 container configmap-volume-test: 
STEP: delete the pod
Sep 18 03:20:12.478: INFO: Waiting for pod pod-configmaps-f7520393-97c8-4fd4-bdc6-bed15754dda2 to disappear
Sep 18 03:20:12.482: INFO: Pod pod-configmaps-f7520393-97c8-4fd4-bdc6-bed15754dda2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:20:12.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4610" for this suite.
Sep 18 03:20:18.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:20:18.657: INFO: namespace configmap-4610 deletion completed in 6.166333106s

• [SLOW TEST:10.353 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:20:18.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 18 03:20:18.749: INFO: Waiting up to 5m0s for pod "pod-66ad0a95-42b5-4dd0-80a9-39fef7f16c85" in namespace "emptydir-4075" to be "success or failure"
Sep 18 03:20:18.758: INFO: Pod "pod-66ad0a95-42b5-4dd0-80a9-39fef7f16c85": Phase="Pending", Reason="", readiness=false. Elapsed: 9.131004ms
Sep 18 03:20:20.766: INFO: Pod "pod-66ad0a95-42b5-4dd0-80a9-39fef7f16c85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016323494s
Sep 18 03:20:22.772: INFO: Pod "pod-66ad0a95-42b5-4dd0-80a9-39fef7f16c85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022436197s
STEP: Saw pod success
Sep 18 03:20:22.772: INFO: Pod "pod-66ad0a95-42b5-4dd0-80a9-39fef7f16c85" satisfied condition "success or failure"
Sep 18 03:20:22.776: INFO: Trying to get logs from node iruya-worker pod pod-66ad0a95-42b5-4dd0-80a9-39fef7f16c85 container test-container: 
STEP: delete the pod
Sep 18 03:20:22.837: INFO: Waiting for pod pod-66ad0a95-42b5-4dd0-80a9-39fef7f16c85 to disappear
Sep 18 03:20:22.861: INFO: Pod pod-66ad0a95-42b5-4dd0-80a9-39fef7f16c85 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:20:22.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4075" for this suite.
Sep 18 03:20:28.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:20:29.022: INFO: namespace emptydir-4075 deletion completed in 6.151324333s

• [SLOW TEST:10.363 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:20:29.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep 18 03:20:29.101: INFO: Waiting up to 5m0s for pod "downward-api-f9a4b13c-dd42-4028-84b1-50b8850869ff" in namespace "downward-api-803" to be "success or failure"
Sep 18 03:20:29.121: INFO: Pod "downward-api-f9a4b13c-dd42-4028-84b1-50b8850869ff": Phase="Pending", Reason="", readiness=false. Elapsed: 20.18649ms
Sep 18 03:20:31.129: INFO: Pod "downward-api-f9a4b13c-dd42-4028-84b1-50b8850869ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028122195s
Sep 18 03:20:33.136: INFO: Pod "downward-api-f9a4b13c-dd42-4028-84b1-50b8850869ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035398028s
STEP: Saw pod success
Sep 18 03:20:33.137: INFO: Pod "downward-api-f9a4b13c-dd42-4028-84b1-50b8850869ff" satisfied condition "success or failure"
Sep 18 03:20:33.142: INFO: Trying to get logs from node iruya-worker2 pod downward-api-f9a4b13c-dd42-4028-84b1-50b8850869ff container dapi-container: 
STEP: delete the pod
Sep 18 03:20:33.211: INFO: Waiting for pod downward-api-f9a4b13c-dd42-4028-84b1-50b8850869ff to disappear
Sep 18 03:20:33.274: INFO: Pod downward-api-f9a4b13c-dd42-4028-84b1-50b8850869ff no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:20:33.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-803" for this suite.
Sep 18 03:20:39.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:20:39.456: INFO: namespace downward-api-803 deletion completed in 6.173246296s

• [SLOW TEST:10.432 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:20:39.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-zsmh
STEP: Creating a pod to test atomic-volume-subpath
Sep 18 03:20:39.586: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zsmh" in namespace "subpath-4023" to be "success or failure"
Sep 18 03:20:39.653: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Pending", Reason="", readiness=false. Elapsed: 66.757757ms
Sep 18 03:20:41.661: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074700384s
Sep 18 03:20:43.667: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Running", Reason="", readiness=true. Elapsed: 4.080435325s
Sep 18 03:20:45.673: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Running", Reason="", readiness=true. Elapsed: 6.087379541s
Sep 18 03:20:47.680: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Running", Reason="", readiness=true. Elapsed: 8.09397675s
Sep 18 03:20:49.687: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Running", Reason="", readiness=true. Elapsed: 10.100799375s
Sep 18 03:20:51.694: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Running", Reason="", readiness=true. Elapsed: 12.107846082s
Sep 18 03:20:53.701: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Running", Reason="", readiness=true. Elapsed: 14.114808087s
Sep 18 03:20:55.707: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Running", Reason="", readiness=true. Elapsed: 16.121162972s
Sep 18 03:20:57.714: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Running", Reason="", readiness=true. Elapsed: 18.128297897s
Sep 18 03:20:59.720: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Running", Reason="", readiness=true. Elapsed: 20.134376776s
Sep 18 03:21:01.728: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Running", Reason="", readiness=true. Elapsed: 22.142304585s
Sep 18 03:21:03.734: INFO: Pod "pod-subpath-test-secret-zsmh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.148110467s
STEP: Saw pod success
Sep 18 03:21:03.735: INFO: Pod "pod-subpath-test-secret-zsmh" satisfied condition "success or failure"
Sep 18 03:21:03.739: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-zsmh container test-container-subpath-secret-zsmh: 
STEP: delete the pod
Sep 18 03:21:03.754: INFO: Waiting for pod pod-subpath-test-secret-zsmh to disappear
Sep 18 03:21:03.778: INFO: Pod pod-subpath-test-secret-zsmh no longer exists
STEP: Deleting pod pod-subpath-test-secret-zsmh
Sep 18 03:21:03.778: INFO: Deleting pod "pod-subpath-test-secret-zsmh" in namespace "subpath-4023"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:21:03.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4023" for this suite.
Sep 18 03:21:09.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:21:09.967: INFO: namespace subpath-4023 deletion completed in 6.177298538s

• [SLOW TEST:30.509 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:21:09.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 18 03:21:10.115: INFO: Waiting up to 5m0s for pod "pod-03aa3da1-15d5-4a39-a0f9-023be47e843a" in namespace "emptydir-9785" to be "success or failure"
Sep 18 03:21:10.131: INFO: Pod "pod-03aa3da1-15d5-4a39-a0f9-023be47e843a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.039804ms
Sep 18 03:21:12.162: INFO: Pod "pod-03aa3da1-15d5-4a39-a0f9-023be47e843a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046516869s
Sep 18 03:21:14.169: INFO: Pod "pod-03aa3da1-15d5-4a39-a0f9-023be47e843a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053345045s
STEP: Saw pod success
Sep 18 03:21:14.169: INFO: Pod "pod-03aa3da1-15d5-4a39-a0f9-023be47e843a" satisfied condition "success or failure"
Sep 18 03:21:14.174: INFO: Trying to get logs from node iruya-worker2 pod pod-03aa3da1-15d5-4a39-a0f9-023be47e843a container test-container: 
STEP: delete the pod
Sep 18 03:21:14.308: INFO: Waiting for pod pod-03aa3da1-15d5-4a39-a0f9-023be47e843a to disappear
Sep 18 03:21:14.316: INFO: Pod pod-03aa3da1-15d5-4a39-a0f9-023be47e843a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:21:14.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9785" for this suite.
Sep 18 03:21:20.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:21:20.485: INFO: namespace emptydir-9785 deletion completed in 6.162104846s

• [SLOW TEST:10.516 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:21:20.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Sep 18 03:21:20.590: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-418,SelfLink:/api/v1/namespaces/watch-418/configmaps/e2e-watch-test-resource-version,UID:700d2a1f-80a3-4a01-bb70-54c32f0c48f9,ResourceVersion:796853,Generation:0,CreationTimestamp:2020-09-18 03:21:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 18 03:21:20.591: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-418,SelfLink:/api/v1/namespaces/watch-418/configmaps/e2e-watch-test-resource-version,UID:700d2a1f-80a3-4a01-bb70-54c32f0c48f9,ResourceVersion:796854,Generation:0,CreationTimestamp:2020-09-18 03:21:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:21:20.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-418" for this suite.
Sep 18 03:21:26.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:21:26.755: INFO: namespace watch-418 deletion completed in 6.155362455s

• [SLOW TEST:6.267 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:21:26.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:21:26.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2968'
Sep 18 03:21:28.328: INFO: stderr: ""
Sep 18 03:21:28.329: INFO: stdout: "replicationcontroller/redis-master created\n"
Sep 18 03:21:28.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2968'
Sep 18 03:21:31.054: INFO: stderr: ""
Sep 18 03:21:31.054: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep 18 03:21:32.063: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 03:21:32.064: INFO: Found 1 / 1
Sep 18 03:21:32.064: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Sep 18 03:21:32.070: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 03:21:32.071: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 18 03:21:32.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-pgr7l --namespace=kubectl-2968'
Sep 18 03:21:35.857: INFO: stderr: ""
Sep 18 03:21:35.857: INFO: stdout: "Name:           redis-master-pgr7l\nNamespace:      kubectl-2968\nPriority:       0\nNode:           iruya-worker/172.18.0.6\nStart Time:     Fri, 18 Sep 2020 03:21:28 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.244.2.85\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://10bcc4f0bbaf628badf962dcf50a5f9e8bcb41fe9f3ec7d1070dc16cecdd5965\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 18 Sep 2020 03:21:31 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tndqn (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-tndqn:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-tndqn\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  7s    default-scheduler      Successfully assigned kubectl-2968/redis-master-pgr7l to iruya-worker\n  Normal  Pulled     6s    kubelet, iruya-worker  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    4s    kubelet, iruya-worker  Created container redis-master\n  Normal  Started    4s    kubelet, iruya-worker  Started container redis-master\n"
Sep 18 03:21:35.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2968'
Sep 18 03:21:37.062: INFO: stderr: ""
Sep 18 03:21:37.062: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-2968\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-pgr7l\n"
Sep 18 03:21:37.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2968'
Sep 18 03:21:38.238: INFO: stderr: ""
Sep 18 03:21:38.238: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-2968\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.96.157.44\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.2.85:6379\nSession Affinity:  None\nEvents:            \n"
Sep 18 03:21:38.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Sep 18 03:21:39.543: INFO: stderr: ""
Sep 18 03:21:39.543: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 13 Sep 2020 16:50:28 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Fri, 18 Sep 2020 03:21:29 +0000   Sun, 13 Sep 2020 16:50:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Fri, 18 Sep 2020 03:21:29 +0000   Sun, 13 Sep 2020 16:50:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Fri, 18 Sep 2020 03:21:29 +0000   Sun, 13 Sep 2020 16:50:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Fri, 18 Sep 2020 03:21:29 +0000   Sun, 13 Sep 2020 16:50:58 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.5\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nSystem Info:\n Machine ID:                 839e6de832314e7e9fb7ad9291f1bb5d\n System UUID:                b2393707-00b1-4de1-b2b8-b4f8e5f4aba4\n Boot ID:                    6cae8cc9-70fd-486a-9495-a1a7da130c42\n Kernel Version:             4.15.0-115-generic\n OS Image:                   Ubuntu 19.10\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.3.3-14-g449e9269\n Kubelet Version:            v1.15.11\n Kube-Proxy Version:         v1.15.11\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-dk9pd                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     4d10h\n  kube-system                coredns-5d4dd4b4db-z4j46                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     4d10h\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d10h\n  kube-system                kindnet-2hdwx                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      4d10h\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         4d10h\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         4d10h\n  kube-system                kube-proxy-sj928                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d10h\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         4d10h\n  local-path-storage         local-path-provisioner-668779bd7-vbnhd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d10h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Sep 18 03:21:39.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2968'
Sep 18 03:21:40.720: INFO: stderr: ""
Sep 18 03:21:40.720: INFO: stdout: "Name:         kubectl-2968\nLabels:       e2e-framework=kubectl\n              e2e-run=a387e887-f057-4e43-b89b-439f6701652b\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:21:40.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2968" for this suite.
Sep 18 03:22:04.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:22:04.893: INFO: namespace kubectl-2968 deletion completed in 24.162312998s

• [SLOW TEST:38.137 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:22:04.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-j7bj
STEP: Creating a pod to test atomic-volume-subpath
Sep 18 03:22:05.010: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-j7bj" in namespace "subpath-7138" to be "success or failure"
Sep 18 03:22:05.053: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Pending", Reason="", readiness=false. Elapsed: 42.535408ms
Sep 18 03:22:07.115: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104299355s
Sep 18 03:22:09.122: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Running", Reason="", readiness=true. Elapsed: 4.111635758s
Sep 18 03:22:11.130: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Running", Reason="", readiness=true. Elapsed: 6.119313377s
Sep 18 03:22:13.137: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Running", Reason="", readiness=true. Elapsed: 8.126947462s
Sep 18 03:22:15.145: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Running", Reason="", readiness=true. Elapsed: 10.134857792s
Sep 18 03:22:17.152: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Running", Reason="", readiness=true. Elapsed: 12.141952202s
Sep 18 03:22:19.159: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Running", Reason="", readiness=true. Elapsed: 14.148971522s
Sep 18 03:22:21.167: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Running", Reason="", readiness=true. Elapsed: 16.1560706s
Sep 18 03:22:23.174: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Running", Reason="", readiness=true. Elapsed: 18.163766395s
Sep 18 03:22:25.182: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Running", Reason="", readiness=true. Elapsed: 20.171346711s
Sep 18 03:22:27.188: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Running", Reason="", readiness=true. Elapsed: 22.177553332s
Sep 18 03:22:29.195: INFO: Pod "pod-subpath-test-configmap-j7bj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.184545418s
STEP: Saw pod success
Sep 18 03:22:29.195: INFO: Pod "pod-subpath-test-configmap-j7bj" satisfied condition "success or failure"
Sep 18 03:22:29.201: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-j7bj container test-container-subpath-configmap-j7bj: 
STEP: delete the pod
Sep 18 03:22:29.236: INFO: Waiting for pod pod-subpath-test-configmap-j7bj to disappear
Sep 18 03:22:29.243: INFO: Pod pod-subpath-test-configmap-j7bj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-j7bj
Sep 18 03:22:29.244: INFO: Deleting pod "pod-subpath-test-configmap-j7bj" in namespace "subpath-7138"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:22:29.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7138" for this suite.
Sep 18 03:22:35.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:22:35.458: INFO: namespace subpath-7138 deletion completed in 6.204956583s

• [SLOW TEST:30.561 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:22:35.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 18 03:22:40.069: INFO: Successfully updated pod "pod-update-309eb8ea-2c51-4f7d-810d-0c642db756c2"
STEP: verifying the updated pod is in kubernetes
Sep 18 03:22:40.120: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:22:40.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3253" for this suite.
Sep 18 03:23:02.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:23:02.336: INFO: namespace pods-3253 deletion completed in 22.206749769s

• [SLOW TEST:26.878 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:23:02.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0918 03:23:12.463891       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 18 03:23:12.464: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:23:12.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5059" for this suite.
Sep 18 03:23:18.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:23:18.629: INFO: namespace gc-5059 deletion completed in 6.156053429s

• [SLOW TEST:16.291 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:23:18.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-a243fc0a-7da4-4b49-afb6-3145daf28875
STEP: Creating secret with name s-test-opt-upd-e6f17c3a-82f8-49b6-b724-37a19d14fb97
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a243fc0a-7da4-4b49-afb6-3145daf28875
STEP: Updating secret s-test-opt-upd-e6f17c3a-82f8-49b6-b724-37a19d14fb97
STEP: Creating secret with name s-test-opt-create-e15f57d8-eeb4-40df-b0be-ac066705089f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:23:28.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1643" for this suite.
Sep 18 03:23:50.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:23:51.099: INFO: namespace projected-1643 deletion completed in 22.165681114s

• [SLOW TEST:32.470 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:23:51.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4319
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 18 03:23:51.178: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep 18 03:24:13.316: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.48:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4319 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:24:13.317: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:24:13.419394       7 log.go:172] (0x91aa000) (0x91aa1c0) Create stream
I0918 03:24:13.419612       7 log.go:172] (0x91aa000) (0x91aa1c0) Stream added, broadcasting: 1
I0918 03:24:13.423935       7 log.go:172] (0x91aa000) Reply frame received for 1
I0918 03:24:13.424302       7 log.go:172] (0x91aa000) (0x92089a0) Create stream
I0918 03:24:13.424457       7 log.go:172] (0x91aa000) (0x92089a0) Stream added, broadcasting: 3
I0918 03:24:13.426325       7 log.go:172] (0x91aa000) Reply frame received for 3
I0918 03:24:13.426553       7 log.go:172] (0x91aa000) (0x94d40e0) Create stream
I0918 03:24:13.426675       7 log.go:172] (0x91aa000) (0x94d40e0) Stream added, broadcasting: 5
I0918 03:24:13.428289       7 log.go:172] (0x91aa000) Reply frame received for 5
I0918 03:24:13.530533       7 log.go:172] (0x91aa000) Data frame received for 5
I0918 03:24:13.530738       7 log.go:172] (0x94d40e0) (5) Data frame handling
I0918 03:24:13.530873       7 log.go:172] (0x91aa000) Data frame received for 3
I0918 03:24:13.530999       7 log.go:172] (0x92089a0) (3) Data frame handling
I0918 03:24:13.531143       7 log.go:172] (0x92089a0) (3) Data frame sent
I0918 03:24:13.531244       7 log.go:172] (0x91aa000) Data frame received for 3
I0918 03:24:13.531347       7 log.go:172] (0x92089a0) (3) Data frame handling
I0918 03:24:13.532822       7 log.go:172] (0x91aa000) Data frame received for 1
I0918 03:24:13.532919       7 log.go:172] (0x91aa1c0) (1) Data frame handling
I0918 03:24:13.533019       7 log.go:172] (0x91aa1c0) (1) Data frame sent
I0918 03:24:13.533128       7 log.go:172] (0x91aa000) (0x91aa1c0) Stream removed, broadcasting: 1
I0918 03:24:13.533279       7 log.go:172] (0x91aa000) Go away received
I0918 03:24:13.533674       7 log.go:172] (0x91aa000) (0x91aa1c0) Stream removed, broadcasting: 1
I0918 03:24:13.533861       7 log.go:172] (0x91aa000) (0x92089a0) Stream removed, broadcasting: 3
I0918 03:24:13.533976       7 log.go:172] (0x91aa000) (0x94d40e0) Stream removed, broadcasting: 5
Sep 18 03:24:13.534: INFO: Found all expected endpoints: [netserver-0]
Sep 18 03:24:13.539: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.88:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4319 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:24:13.539: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:24:13.631142       7 log.go:172] (0x77efe30) (0x91b2000) Create stream
I0918 03:24:13.631355       7 log.go:172] (0x77efe30) (0x91b2000) Stream added, broadcasting: 1
I0918 03:24:13.635358       7 log.go:172] (0x77efe30) Reply frame received for 1
I0918 03:24:13.635539       7 log.go:172] (0x77efe30) (0x91b20e0) Create stream
I0918 03:24:13.635622       7 log.go:172] (0x77efe30) (0x91b20e0) Stream added, broadcasting: 3
I0918 03:24:13.636964       7 log.go:172] (0x77efe30) Reply frame received for 3
I0918 03:24:13.637130       7 log.go:172] (0x77efe30) (0x91aa380) Create stream
I0918 03:24:13.637206       7 log.go:172] (0x77efe30) (0x91aa380) Stream added, broadcasting: 5
I0918 03:24:13.638368       7 log.go:172] (0x77efe30) Reply frame received for 5
I0918 03:24:13.702129       7 log.go:172] (0x77efe30) Data frame received for 3
I0918 03:24:13.702364       7 log.go:172] (0x77efe30) Data frame received for 5
I0918 03:24:13.702544       7 log.go:172] (0x91aa380) (5) Data frame handling
I0918 03:24:13.702674       7 log.go:172] (0x91b20e0) (3) Data frame handling
I0918 03:24:13.702812       7 log.go:172] (0x91b20e0) (3) Data frame sent
I0918 03:24:13.702904       7 log.go:172] (0x77efe30) Data frame received for 3
I0918 03:24:13.702983       7 log.go:172] (0x91b20e0) (3) Data frame handling
I0918 03:24:13.704354       7 log.go:172] (0x77efe30) Data frame received for 1
I0918 03:24:13.704453       7 log.go:172] (0x91b2000) (1) Data frame handling
I0918 03:24:13.704560       7 log.go:172] (0x91b2000) (1) Data frame sent
I0918 03:24:13.704675       7 log.go:172] (0x77efe30) (0x91b2000) Stream removed, broadcasting: 1
I0918 03:24:13.704804       7 log.go:172] (0x77efe30) Go away received
I0918 03:24:13.705232       7 log.go:172] (0x77efe30) (0x91b2000) Stream removed, broadcasting: 1
I0918 03:24:13.705393       7 log.go:172] (0x77efe30) (0x91b20e0) Stream removed, broadcasting: 3
I0918 03:24:13.705477       7 log.go:172] (0x77efe30) (0x91aa380) Stream removed, broadcasting: 5
Sep 18 03:24:13.705: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:24:13.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4319" for this suite.
Sep 18 03:24:35.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:24:35.919: INFO: namespace pod-network-test-4319 deletion completed in 22.205630694s

• [SLOW TEST:44.818 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:24:35.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 03:24:36.033: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24cee6f4-e3f7-41ff-9f1b-3b6eac451b32" in namespace "projected-2746" to be "success or failure"
Sep 18 03:24:36.038: INFO: Pod "downwardapi-volume-24cee6f4-e3f7-41ff-9f1b-3b6eac451b32": Phase="Pending", Reason="", readiness=false. Elapsed: 5.319367ms
Sep 18 03:24:38.068: INFO: Pod "downwardapi-volume-24cee6f4-e3f7-41ff-9f1b-3b6eac451b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035424482s
Sep 18 03:24:40.075: INFO: Pod "downwardapi-volume-24cee6f4-e3f7-41ff-9f1b-3b6eac451b32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042257841s
STEP: Saw pod success
Sep 18 03:24:40.076: INFO: Pod "downwardapi-volume-24cee6f4-e3f7-41ff-9f1b-3b6eac451b32" satisfied condition "success or failure"
Sep 18 03:24:40.080: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-24cee6f4-e3f7-41ff-9f1b-3b6eac451b32 container client-container: 
STEP: delete the pod
Sep 18 03:24:40.105: INFO: Waiting for pod downwardapi-volume-24cee6f4-e3f7-41ff-9f1b-3b6eac451b32 to disappear
Sep 18 03:24:40.110: INFO: Pod downwardapi-volume-24cee6f4-e3f7-41ff-9f1b-3b6eac451b32 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:24:40.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2746" for this suite.
Sep 18 03:24:46.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:24:46.293: INFO: namespace projected-2746 deletion completed in 6.175105583s

• [SLOW TEST:10.373 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:24:46.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-4bf16c7e-f84d-4beb-ba84-a6e4f042e378
STEP: Creating a pod to test consume configMaps
Sep 18 03:24:46.413: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-800e175a-a0b6-4115-b504-4068e1038161" in namespace "projected-2672" to be "success or failure"
Sep 18 03:24:46.423: INFO: Pod "pod-projected-configmaps-800e175a-a0b6-4115-b504-4068e1038161": Phase="Pending", Reason="", readiness=false. Elapsed: 9.254478ms
Sep 18 03:24:48.430: INFO: Pod "pod-projected-configmaps-800e175a-a0b6-4115-b504-4068e1038161": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016557906s
Sep 18 03:24:50.436: INFO: Pod "pod-projected-configmaps-800e175a-a0b6-4115-b504-4068e1038161": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022734897s
STEP: Saw pod success
Sep 18 03:24:50.436: INFO: Pod "pod-projected-configmaps-800e175a-a0b6-4115-b504-4068e1038161" satisfied condition "success or failure"
Sep 18 03:24:50.441: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-800e175a-a0b6-4115-b504-4068e1038161 container projected-configmap-volume-test: 
STEP: delete the pod
Sep 18 03:24:50.496: INFO: Waiting for pod pod-projected-configmaps-800e175a-a0b6-4115-b504-4068e1038161 to disappear
Sep 18 03:24:50.525: INFO: Pod pod-projected-configmaps-800e175a-a0b6-4115-b504-4068e1038161 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:24:50.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2672" for this suite.
Sep 18 03:24:56.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:24:56.733: INFO: namespace projected-2672 deletion completed in 6.19836564s

• [SLOW TEST:10.440 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:24:56.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-bv67
STEP: Creating a pod to test atomic-volume-subpath
Sep 18 03:24:56.837: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bv67" in namespace "subpath-9085" to be "success or failure"
Sep 18 03:24:56.974: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Pending", Reason="", readiness=false. Elapsed: 136.32123ms
Sep 18 03:24:58.981: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143370357s
Sep 18 03:25:00.987: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Running", Reason="", readiness=true. Elapsed: 4.149635716s
Sep 18 03:25:02.994: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Running", Reason="", readiness=true. Elapsed: 6.157214367s
Sep 18 03:25:05.002: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Running", Reason="", readiness=true. Elapsed: 8.164735787s
Sep 18 03:25:07.010: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Running", Reason="", readiness=true. Elapsed: 10.172651663s
Sep 18 03:25:09.017: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Running", Reason="", readiness=true. Elapsed: 12.180223668s
Sep 18 03:25:11.025: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Running", Reason="", readiness=true. Elapsed: 14.187955902s
Sep 18 03:25:13.032: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Running", Reason="", readiness=true. Elapsed: 16.19430256s
Sep 18 03:25:15.039: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Running", Reason="", readiness=true. Elapsed: 18.201565101s
Sep 18 03:25:17.046: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Running", Reason="", readiness=true. Elapsed: 20.209030701s
Sep 18 03:25:19.055: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Running", Reason="", readiness=true. Elapsed: 22.217460224s
Sep 18 03:25:21.062: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Running", Reason="", readiness=true. Elapsed: 24.224576624s
Sep 18 03:25:23.069: INFO: Pod "pod-subpath-test-configmap-bv67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.231478602s
STEP: Saw pod success
Sep 18 03:25:23.069: INFO: Pod "pod-subpath-test-configmap-bv67" satisfied condition "success or failure"
Sep 18 03:25:23.075: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-bv67 container test-container-subpath-configmap-bv67: 
STEP: delete the pod
Sep 18 03:25:23.136: INFO: Waiting for pod pod-subpath-test-configmap-bv67 to disappear
Sep 18 03:25:23.165: INFO: Pod pod-subpath-test-configmap-bv67 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-bv67
Sep 18 03:25:23.165: INFO: Deleting pod "pod-subpath-test-configmap-bv67" in namespace "subpath-9085"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:25:23.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9085" for this suite.
Sep 18 03:25:29.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:25:29.335: INFO: namespace subpath-9085 deletion completed in 6.157381215s

• [SLOW TEST:32.601 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:25:29.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Sep 18 03:25:33.989: INFO: Successfully updated pod "labelsupdate7d759136-4e79-4df1-93f7-e350c8eb7e82"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:25:36.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8004" for this suite.
Sep 18 03:25:54.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:25:54.210: INFO: namespace downward-api-8004 deletion completed in 18.174885893s

• [SLOW TEST:24.870 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:25:54.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Sep 18 03:25:59.047: INFO: Successfully updated pod "annotationupdate21dc8461-37cb-4b70-ab6d-7cd543170243"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:26:01.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2404" for this suite.
Sep 18 03:26:25.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:26:25.321: INFO: namespace projected-2404 deletion completed in 24.230534698s

• [SLOW TEST:31.110 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:26:25.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 18 03:26:25.431: INFO: Waiting up to 5m0s for pod "pod-3b367c9d-ebfd-42de-aa73-60ed57ec8aaf" in namespace "emptydir-2234" to be "success or failure"
Sep 18 03:26:25.456: INFO: Pod "pod-3b367c9d-ebfd-42de-aa73-60ed57ec8aaf": Phase="Pending", Reason="", readiness=false. Elapsed: 25.092105ms
Sep 18 03:26:27.507: INFO: Pod "pod-3b367c9d-ebfd-42de-aa73-60ed57ec8aaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075917259s
Sep 18 03:26:29.515: INFO: Pod "pod-3b367c9d-ebfd-42de-aa73-60ed57ec8aaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083951388s
STEP: Saw pod success
Sep 18 03:26:29.516: INFO: Pod "pod-3b367c9d-ebfd-42de-aa73-60ed57ec8aaf" satisfied condition "success or failure"
Sep 18 03:26:29.688: INFO: Trying to get logs from node iruya-worker2 pod pod-3b367c9d-ebfd-42de-aa73-60ed57ec8aaf container test-container: 
STEP: delete the pod
Sep 18 03:26:29.755: INFO: Waiting for pod pod-3b367c9d-ebfd-42de-aa73-60ed57ec8aaf to disappear
Sep 18 03:26:29.784: INFO: Pod pod-3b367c9d-ebfd-42de-aa73-60ed57ec8aaf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:26:29.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2234" for this suite.
Sep 18 03:26:35.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:26:35.965: INFO: namespace emptydir-2234 deletion completed in 6.150020323s

• [SLOW TEST:10.642 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:26:35.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-c14b74b3-b0be-43c5-a594-3baf2fe74b2b
STEP: Creating secret with name s-test-opt-upd-4d6f01d1-2a78-47c0-9a37-1414e0942802
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c14b74b3-b0be-43c5-a594-3baf2fe74b2b
STEP: Updating secret s-test-opt-upd-4d6f01d1-2a78-47c0-9a37-1414e0942802
STEP: Creating secret with name s-test-opt-create-96105dca-3226-43c1-b6a8-ad0fd6bf87ba
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:27:46.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5585" for this suite.
Sep 18 03:28:08.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:28:08.748: INFO: namespace secrets-5585 deletion completed in 22.162968117s

• [SLOW TEST:92.782 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:28:08.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:28:12.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7040" for this suite.
Sep 18 03:28:50.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:28:51.025: INFO: namespace kubelet-test-7040 deletion completed in 38.149942762s

• [SLOW TEST:42.275 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:28:51.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 18 03:28:55.166: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:28:55.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2440" for this suite.
Sep 18 03:29:01.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:29:01.345: INFO: namespace container-runtime-2440 deletion completed in 6.155266025s

• [SLOW TEST:10.318 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:29:01.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6cfb882f-4fce-4df8-b796-bb965c8aef53
STEP: Creating a pod to test consume secrets
Sep 18 03:29:01.462: INFO: Waiting up to 5m0s for pod "pod-secrets-f1a6b417-3d6f-42cf-97dd-d5937d2471e3" in namespace "secrets-8405" to be "success or failure"
Sep 18 03:29:01.469: INFO: Pod "pod-secrets-f1a6b417-3d6f-42cf-97dd-d5937d2471e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.703343ms
Sep 18 03:29:03.476: INFO: Pod "pod-secrets-f1a6b417-3d6f-42cf-97dd-d5937d2471e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013764457s
Sep 18 03:29:05.482: INFO: Pod "pod-secrets-f1a6b417-3d6f-42cf-97dd-d5937d2471e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020358318s
STEP: Saw pod success
Sep 18 03:29:05.483: INFO: Pod "pod-secrets-f1a6b417-3d6f-42cf-97dd-d5937d2471e3" satisfied condition "success or failure"
Sep 18 03:29:05.486: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-f1a6b417-3d6f-42cf-97dd-d5937d2471e3 container secret-volume-test: 
STEP: delete the pod
Sep 18 03:29:05.524: INFO: Waiting for pod pod-secrets-f1a6b417-3d6f-42cf-97dd-d5937d2471e3 to disappear
Sep 18 03:29:05.540: INFO: Pod pod-secrets-f1a6b417-3d6f-42cf-97dd-d5937d2471e3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:29:05.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8405" for this suite.
Sep 18 03:29:11.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:29:11.704: INFO: namespace secrets-8405 deletion completed in 6.154746614s

• [SLOW TEST:10.357 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:29:11.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 18 03:29:11.784: INFO: Waiting up to 5m0s for pod "pod-3462da41-0320-4029-8739-dce6be8c944c" in namespace "emptydir-3028" to be "success or failure"
Sep 18 03:29:11.793: INFO: Pod "pod-3462da41-0320-4029-8739-dce6be8c944c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.128588ms
Sep 18 03:29:13.801: INFO: Pod "pod-3462da41-0320-4029-8739-dce6be8c944c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016671829s
Sep 18 03:29:15.808: INFO: Pod "pod-3462da41-0320-4029-8739-dce6be8c944c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02352191s
STEP: Saw pod success
Sep 18 03:29:15.808: INFO: Pod "pod-3462da41-0320-4029-8739-dce6be8c944c" satisfied condition "success or failure"
Sep 18 03:29:15.813: INFO: Trying to get logs from node iruya-worker pod pod-3462da41-0320-4029-8739-dce6be8c944c container test-container: 
STEP: delete the pod
Sep 18 03:29:15.876: INFO: Waiting for pod pod-3462da41-0320-4029-8739-dce6be8c944c to disappear
Sep 18 03:29:15.882: INFO: Pod pod-3462da41-0320-4029-8739-dce6be8c944c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:29:15.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3028" for this suite.
Sep 18 03:29:21.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:29:22.085: INFO: namespace emptydir-3028 deletion completed in 6.194124336s

• [SLOW TEST:10.381 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:29:22.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:29:22.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:29:26.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7602" for this suite.
Sep 18 03:30:06.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:30:06.665: INFO: namespace pods-7602 deletion completed in 40.197103361s

• [SLOW TEST:44.578 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:30:06.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0918 03:30:46.903959       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 18 03:30:46.904: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:30:46.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-490" for this suite.
Sep 18 03:30:54.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:30:55.067: INFO: namespace gc-490 deletion completed in 8.156373418s

• [SLOW TEST:48.400 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:30:55.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:31:03.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1298" for this suite.
Sep 18 03:31:10.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:31:10.239: INFO: namespace kubelet-test-1298 deletion completed in 6.459408804s

• [SLOW TEST:15.170 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:31:10.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:31:10.326: INFO: Creating ReplicaSet my-hostname-basic-962ecddf-1ff7-4fc9-ae05-9e51c8262d53
Sep 18 03:31:10.355: INFO: Pod name my-hostname-basic-962ecddf-1ff7-4fc9-ae05-9e51c8262d53: Found 0 pods out of 1
Sep 18 03:31:15.362: INFO: Pod name my-hostname-basic-962ecddf-1ff7-4fc9-ae05-9e51c8262d53: Found 1 pods out of 1
Sep 18 03:31:15.363: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-962ecddf-1ff7-4fc9-ae05-9e51c8262d53" is running
Sep 18 03:31:15.368: INFO: Pod "my-hostname-basic-962ecddf-1ff7-4fc9-ae05-9e51c8262d53-r54q2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-18 03:31:10 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-18 03:31:13 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-18 03:31:13 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-18 03:31:10 +0000 UTC Reason: Message:}])
Sep 18 03:31:15.369: INFO: Trying to dial the pod
Sep 18 03:31:20.386: INFO: Controller my-hostname-basic-962ecddf-1ff7-4fc9-ae05-9e51c8262d53: Got expected result from replica 1 [my-hostname-basic-962ecddf-1ff7-4fc9-ae05-9e51c8262d53-r54q2]: "my-hostname-basic-962ecddf-1ff7-4fc9-ae05-9e51c8262d53-r54q2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:31:20.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8647" for this suite.
Sep 18 03:31:26.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:31:26.573: INFO: namespace replicaset-8647 deletion completed in 6.16233732s

• [SLOW TEST:16.333 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:31:26.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Sep 18 03:31:26.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8294'
Sep 18 03:31:28.343: INFO: stderr: ""
Sep 18 03:31:28.343: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep 18 03:31:29.353: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 03:31:29.353: INFO: Found 0 / 1
Sep 18 03:31:30.351: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 03:31:30.351: INFO: Found 0 / 1
Sep 18 03:31:31.351: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 03:31:31.351: INFO: Found 0 / 1
Sep 18 03:31:32.351: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 03:31:32.351: INFO: Found 0 / 1
Sep 18 03:31:33.351: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 03:31:33.351: INFO: Found 1 / 1
Sep 18 03:31:33.351: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Sep 18 03:31:33.358: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 03:31:33.358: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 18 03:31:33.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-t298t --namespace=kubectl-8294 -p {"metadata":{"annotations":{"x":"y"}}}'
Sep 18 03:31:37.079: INFO: stderr: ""
Sep 18 03:31:37.079: INFO: stdout: "pod/redis-master-t298t patched\n"
STEP: checking annotations
Sep 18 03:31:37.086: INFO: Selector matched 1 pods for map[app:redis]
Sep 18 03:31:37.086: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:31:37.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8294" for this suite.
Sep 18 03:31:59.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:31:59.258: INFO: namespace kubectl-8294 deletion completed in 22.162248043s

• [SLOW TEST:32.680 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
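Editorial note: the `kubectl patch` call in the test above submits `{"metadata":{"annotations":{"x":"y"}}}`. For a plain string map such as `metadata.annotations`, this merges keys into the existing map rather than replacing it, much like a JSON merge patch (RFC 7386). A minimal sketch of that merge behaviour (a hypothetical helper for illustration, not the e2e framework's code):

```python
def merge_patch(target, patch):
    """Recursively merge `patch` into `target`, JSON-merge-patch style:
    dict values merge key-by-key, None deletes a key, anything else replaces."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result

# Mirrors the patch from the log: an existing annotation survives,
# and the new "x": "y" is added alongside it.
pod = {"metadata": {"name": "redis-master-t298t", "annotations": {"a": "b"}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
```

This is why the test's "checking annotations" step can simply look for the `x` key afterwards: merging never disturbs the pod's other metadata.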
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:31:59.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-199
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 18 03:31:59.320: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep 18 03:32:25.470: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.64:8080/dial?request=hostName&protocol=http&host=10.244.2.101&port=8080&tries=1'] Namespace:pod-network-test-199 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:32:25.470: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:32:25.572245       7 log.go:172] (0x91b2af0) (0x91b2b60) Create stream
I0918 03:32:25.572430       7 log.go:172] (0x91b2af0) (0x91b2b60) Stream added, broadcasting: 1
I0918 03:32:25.576114       7 log.go:172] (0x91b2af0) Reply frame received for 1
I0918 03:32:25.576343       7 log.go:172] (0x91b2af0) (0x91b2bd0) Create stream
I0918 03:32:25.576430       7 log.go:172] (0x91b2af0) (0x91b2bd0) Stream added, broadcasting: 3
I0918 03:32:25.577753       7 log.go:172] (0x91b2af0) Reply frame received for 3
I0918 03:32:25.577894       7 log.go:172] (0x91b2af0) (0x9522e00) Create stream
I0918 03:32:25.577978       7 log.go:172] (0x91b2af0) (0x9522e00) Stream added, broadcasting: 5
I0918 03:32:25.579436       7 log.go:172] (0x91b2af0) Reply frame received for 5
I0918 03:32:25.663241       7 log.go:172] (0x91b2af0) Data frame received for 3
I0918 03:32:25.663550       7 log.go:172] (0x91b2bd0) (3) Data frame handling
I0918 03:32:25.663796       7 log.go:172] (0x91b2af0) Data frame received for 5
I0918 03:32:25.664049       7 log.go:172] (0x9522e00) (5) Data frame handling
I0918 03:32:25.664401       7 log.go:172] (0x91b2bd0) (3) Data frame sent
I0918 03:32:25.664583       7 log.go:172] (0x91b2af0) Data frame received for 3
I0918 03:32:25.664694       7 log.go:172] (0x91b2bd0) (3) Data frame handling
I0918 03:32:25.665259       7 log.go:172] (0x91b2af0) Data frame received for 1
I0918 03:32:25.665400       7 log.go:172] (0x91b2b60) (1) Data frame handling
I0918 03:32:25.665594       7 log.go:172] (0x91b2b60) (1) Data frame sent
I0918 03:32:25.665799       7 log.go:172] (0x91b2af0) (0x91b2b60) Stream removed, broadcasting: 1
I0918 03:32:25.666027       7 log.go:172] (0x91b2af0) Go away received
I0918 03:32:25.666611       7 log.go:172] (0x91b2af0) (0x91b2b60) Stream removed, broadcasting: 1
I0918 03:32:25.666807       7 log.go:172] (0x91b2af0) (0x91b2bd0) Stream removed, broadcasting: 3
I0918 03:32:25.666941       7 log.go:172] (0x91b2af0) (0x9522e00) Stream removed, broadcasting: 5
Sep 18 03:32:25.667: INFO: Waiting for endpoints: map[]
Sep 18 03:32:25.673: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.64:8080/dial?request=hostName&protocol=http&host=10.244.1.63&port=8080&tries=1'] Namespace:pod-network-test-199 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 18 03:32:25.673: INFO: >>> kubeConfig: /root/.kube/config
I0918 03:32:25.776357       7 log.go:172] (0x8f247e0) (0x8f248c0) Create stream
I0918 03:32:25.776532       7 log.go:172] (0x8f247e0) (0x8f248c0) Stream added, broadcasting: 1
I0918 03:32:25.781686       7 log.go:172] (0x8f247e0) Reply frame received for 1
I0918 03:32:25.781980       7 log.go:172] (0x8f247e0) (0x7e86000) Create stream
I0918 03:32:25.782120       7 log.go:172] (0x8f247e0) (0x7e86000) Stream added, broadcasting: 3
I0918 03:32:25.784050       7 log.go:172] (0x8f247e0) Reply frame received for 3
I0918 03:32:25.784286       7 log.go:172] (0x8f247e0) (0x7e86070) Create stream
I0918 03:32:25.784394       7 log.go:172] (0x8f247e0) (0x7e86070) Stream added, broadcasting: 5
I0918 03:32:25.786382       7 log.go:172] (0x8f247e0) Reply frame received for 5
I0918 03:32:25.850953       7 log.go:172] (0x8f247e0) Data frame received for 3
I0918 03:32:25.851204       7 log.go:172] (0x7e86000) (3) Data frame handling
I0918 03:32:25.851391       7 log.go:172] (0x8f247e0) Data frame received for 5
I0918 03:32:25.851624       7 log.go:172] (0x7e86070) (5) Data frame handling
I0918 03:32:25.851778       7 log.go:172] (0x7e86000) (3) Data frame sent
I0918 03:32:25.851957       7 log.go:172] (0x8f247e0) Data frame received for 3
I0918 03:32:25.852107       7 log.go:172] (0x7e86000) (3) Data frame handling
I0918 03:32:25.852508       7 log.go:172] (0x8f247e0) Data frame received for 1
I0918 03:32:25.852611       7 log.go:172] (0x8f248c0) (1) Data frame handling
I0918 03:32:25.852725       7 log.go:172] (0x8f248c0) (1) Data frame sent
I0918 03:32:25.852846       7 log.go:172] (0x8f247e0) (0x8f248c0) Stream removed, broadcasting: 1
I0918 03:32:25.853096       7 log.go:172] (0x8f247e0) Go away received
I0918 03:32:25.853388       7 log.go:172] (0x8f247e0) (0x8f248c0) Stream removed, broadcasting: 1
I0918 03:32:25.853587       7 log.go:172] (0x8f247e0) (0x7e86000) Stream removed, broadcasting: 3
I0918 03:32:25.853780       7 log.go:172] (0x8f247e0) (0x7e86070) Stream removed, broadcasting: 5
Sep 18 03:32:25.854: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:32:25.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-199" for this suite.
Sep 18 03:32:49.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:32:50.029: INFO: namespace pod-network-test-199 deletion completed in 24.162744757s

• [SLOW TEST:50.768 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
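Editorial note: each connectivity check in the networking test above is a `curl` run inside a host-exec pod against a test container's `/dial` endpoint, which forwards the request on to the target pod and reports what it answered. A sketch of how such a probe URL is assembled (hosts and parameters taken from the log lines; the builder function itself is hypothetical):

```python
from urllib.parse import parse_qs, urlencode, urlsplit

def dial_url(dial_host, target_host, target_port, protocol="http", tries=1):
    """Build the /dial probe URL the e2e test curls: the pod at dial_host
    forwards `tries` hostName requests to target_host:target_port."""
    query = urlencode({
        "request": "hostName",
        "protocol": protocol,
        "host": target_host,
        "port": target_port,
        "tries": tries,
    })
    return f"http://{dial_host}:8080/dial?{query}"

# Reconstructs the first probe from the log above.
url = dial_url("10.244.1.64", "10.244.2.101", 8080)
params = parse_qs(urlsplit(url).query)
```

The "Waiting for endpoints: map[]" lines that follow each probe indicate the set of replicas still unheard-from is empty, i.e. the check passed.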
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:32:50.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Sep 18 03:32:50.082: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Sep 18 03:32:50.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9611'
Sep 18 03:32:51.657: INFO: stderr: ""
Sep 18 03:32:51.657: INFO: stdout: "service/redis-slave created\n"
Sep 18 03:32:51.658: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Sep 18 03:32:51.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9611'
Sep 18 03:32:53.232: INFO: stderr: ""
Sep 18 03:32:53.232: INFO: stdout: "service/redis-master created\n"
Sep 18 03:32:53.234: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Sep 18 03:32:53.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9611'
Sep 18 03:32:54.821: INFO: stderr: ""
Sep 18 03:32:54.822: INFO: stdout: "service/frontend created\n"
Sep 18 03:32:54.824: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Sep 18 03:32:54.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9611'
Sep 18 03:32:56.348: INFO: stderr: ""
Sep 18 03:32:56.349: INFO: stdout: "deployment.apps/frontend created\n"
Sep 18 03:32:56.350: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Sep 18 03:32:56.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9611'
Sep 18 03:32:57.907: INFO: stderr: ""
Sep 18 03:32:57.907: INFO: stdout: "deployment.apps/redis-master created\n"
Sep 18 03:32:57.909: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Sep 18 03:32:57.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9611'
Sep 18 03:33:00.047: INFO: stderr: ""
Sep 18 03:33:00.048: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Sep 18 03:33:00.048: INFO: Waiting for all frontend pods to be Running.
Sep 18 03:33:05.100: INFO: Waiting for frontend to serve content.
Sep 18 03:33:06.693: INFO: Trying to add a new entry to the guestbook.
Sep 18 03:33:06.715: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Sep 18 03:33:06.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9611'
Sep 18 03:33:07.922: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 18 03:33:07.922: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Sep 18 03:33:07.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9611'
Sep 18 03:33:09.088: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 18 03:33:09.088: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Sep 18 03:33:09.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9611'
Sep 18 03:33:10.302: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 18 03:33:10.302: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Sep 18 03:33:10.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9611'
Sep 18 03:33:11.439: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 18 03:33:11.440: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Sep 18 03:33:11.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9611'
Sep 18 03:33:12.650: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 18 03:33:12.650: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Sep 18 03:33:12.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9611'
Sep 18 03:33:13.796: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 18 03:33:13.796: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:33:13.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9611" for this suite.
Sep 18 03:33:55.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:33:56.086: INFO: namespace kubectl-9611 deletion completed in 42.215778491s

• [SLOW TEST:66.054 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
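Editorial note: for the guestbook app to validate, each Service's selector must match the labels on the corresponding Deployment's pod template — e.g. the `frontend` Service selects `app: guestbook, tier: frontend`, which the frontend Deployment's template carries, while the `redis-master` Service additionally requires `role: master`. A quick subset check sketched from the manifests printed above (the helper is illustrative, not Kubernetes code):

```python
def selector_matches(selector, pod_labels):
    """A label selector matches a pod when every selector key/value pair
    appears in the pod's labels; extra pod labels are fine."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

# Label sets copied from the YAML in the log above.
frontend_selector = {"app": "guestbook", "tier": "frontend"}
frontend_pod_labels = {"app": "guestbook", "tier": "frontend"}
redis_master_selector = {"app": "redis", "role": "master", "tier": "backend"}
```

A mismatch here would leave a Service with no endpoints, and the "Waiting for frontend to serve content" step would time out instead of succeeding as it does in this run.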
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:33:56.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:34:56.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8539" for this suite.
Sep 18 03:35:18.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:35:18.432: INFO: namespace container-probe-8539 deletion completed in 22.236980598s

• [SLOW TEST:82.345 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:35:18.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-6jvs
STEP: Creating a pod to test atomic-volume-subpath
Sep 18 03:35:18.575: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6jvs" in namespace "subpath-1928" to be "success or failure"
Sep 18 03:35:18.591: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Pending", Reason="", readiness=false. Elapsed: 15.514383ms
Sep 18 03:35:20.627: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051411793s
Sep 18 03:35:22.634: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Running", Reason="", readiness=true. Elapsed: 4.058117559s
Sep 18 03:35:24.640: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Running", Reason="", readiness=true. Elapsed: 6.064327733s
Sep 18 03:35:26.669: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Running", Reason="", readiness=true. Elapsed: 8.09316307s
Sep 18 03:35:28.676: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Running", Reason="", readiness=true. Elapsed: 10.100269511s
Sep 18 03:35:30.700: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Running", Reason="", readiness=true. Elapsed: 12.124061649s
Sep 18 03:35:32.706: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Running", Reason="", readiness=true. Elapsed: 14.130995521s
Sep 18 03:35:34.714: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Running", Reason="", readiness=true. Elapsed: 16.138941379s
Sep 18 03:35:36.723: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Running", Reason="", readiness=true. Elapsed: 18.147245018s
Sep 18 03:35:38.730: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Running", Reason="", readiness=true. Elapsed: 20.154602872s
Sep 18 03:35:40.738: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Running", Reason="", readiness=true. Elapsed: 22.16223694s
Sep 18 03:35:42.752: INFO: Pod "pod-subpath-test-projected-6jvs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.177005601s
STEP: Saw pod success
Sep 18 03:35:42.753: INFO: Pod "pod-subpath-test-projected-6jvs" satisfied condition "success or failure"
Sep 18 03:35:42.757: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-6jvs container test-container-subpath-projected-6jvs: 
STEP: delete the pod
Sep 18 03:35:42.804: INFO: Waiting for pod pod-subpath-test-projected-6jvs to disappear
Sep 18 03:35:42.809: INFO: Pod pod-subpath-test-projected-6jvs no longer exists
STEP: Deleting pod pod-subpath-test-projected-6jvs
Sep 18 03:35:42.809: INFO: Deleting pod "pod-subpath-test-projected-6jvs" in namespace "subpath-1928"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:35:42.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1928" for this suite.
Sep 18 03:35:48.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:35:48.975: INFO: namespace subpath-1928 deletion completed in 6.154880667s

• [SLOW TEST:30.541 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:35:48.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:35:49.071: INFO: Creating deployment "test-recreate-deployment"
Sep 18 03:35:49.086: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Sep 18 03:35:49.136: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Sep 18 03:35:51.189: INFO: Waiting deployment "test-recreate-deployment" to complete
Sep 18 03:35:51.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735996949, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735996949, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735996949, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735996949, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 18 03:35:53.200: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Sep 18 03:35:53.212: INFO: Updating deployment test-recreate-deployment
Sep 18 03:35:53.213: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep 18 03:35:53.587: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4841,SelfLink:/apis/apps/v1/namespaces/deployment-4841/deployments/test-recreate-deployment,UID:c195f8b5-1c9e-4f59-a9bb-f8e22081a617,ResourceVersion:799761,Generation:2,CreationTimestamp:2020-09-18 03:35:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-09-18 03:35:53 +0000 UTC 2020-09-18 03:35:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-09-18 03:35:53 +0000 UTC 2020-09-18 03:35:49 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Sep 18 03:35:53.597: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4841,SelfLink:/apis/apps/v1/namespaces/deployment-4841/replicasets/test-recreate-deployment-5c8c9cc69d,UID:d0308a62-0def-4763-b6d4-5a4fea5e10be,ResourceVersion:799759,Generation:1,CreationTimestamp:2020-09-18 03:35:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c195f8b5-1c9e-4f59-a9bb-f8e22081a617 0x8cdac77 0x8cdac78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep 18 03:35:53.597: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Sep 18 03:35:53.599: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4841,SelfLink:/apis/apps/v1/namespaces/deployment-4841/replicasets/test-recreate-deployment-6df85df6b9,UID:5ab56bc9-55af-445e-a1a3-22376b11b90f,ResourceVersion:799750,Generation:2,CreationTimestamp:2020-09-18 03:35:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c195f8b5-1c9e-4f59-a9bb-f8e22081a617 0x8cdad57 0x8cdad58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep 18 03:35:53.605: INFO: Pod "test-recreate-deployment-5c8c9cc69d-5mwlb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-5mwlb,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4841,SelfLink:/api/v1/namespaces/deployment-4841/pods/test-recreate-deployment-5c8c9cc69d-5mwlb,UID:c910ee06-5fdf-40a5-a993-f6d68c08a549,ResourceVersion:799762,Generation:0,CreationTimestamp:2020-09-18 03:35:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d d0308a62-0def-4763-b6d4-5a4fea5e10be 0x8cdb677 0x8cdb678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cngmk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cngmk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cngmk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8cdb6f0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8cdb710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:35:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:35:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:35:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:35:53 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:35:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:35:53.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4841" for this suite.
Sep 18 03:35:59.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:35:59.829: INFO: namespace deployment-4841 deletion completed in 6.215805747s

• [SLOW TEST:10.851 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:35:59.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep 18 03:35:59.905: INFO: Waiting up to 5m0s for pod "downward-api-4c9edf90-5f3d-457c-a7a7-98456f8f9bcb" in namespace "downward-api-7659" to be "success or failure"
Sep 18 03:35:59.914: INFO: Pod "downward-api-4c9edf90-5f3d-457c-a7a7-98456f8f9bcb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.652507ms
Sep 18 03:36:01.922: INFO: Pod "downward-api-4c9edf90-5f3d-457c-a7a7-98456f8f9bcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015987369s
Sep 18 03:36:03.930: INFO: Pod "downward-api-4c9edf90-5f3d-457c-a7a7-98456f8f9bcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024033938s
STEP: Saw pod success
Sep 18 03:36:03.930: INFO: Pod "downward-api-4c9edf90-5f3d-457c-a7a7-98456f8f9bcb" satisfied condition "success or failure"
Sep 18 03:36:03.935: INFO: Trying to get logs from node iruya-worker2 pod downward-api-4c9edf90-5f3d-457c-a7a7-98456f8f9bcb container dapi-container: 
STEP: delete the pod
Sep 18 03:36:03.973: INFO: Waiting for pod downward-api-4c9edf90-5f3d-457c-a7a7-98456f8f9bcb to disappear
Sep 18 03:36:03.986: INFO: Pod downward-api-4c9edf90-5f3d-457c-a7a7-98456f8f9bcb no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:36:03.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7659" for this suite.
Sep 18 03:36:10.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:36:10.153: INFO: namespace downward-api-7659 deletion completed in 6.158194331s

• [SLOW TEST:10.321 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:36:10.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Sep 18 03:36:10.769: INFO: created pod pod-service-account-defaultsa
Sep 18 03:36:10.770: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Sep 18 03:36:10.803: INFO: created pod pod-service-account-mountsa
Sep 18 03:36:10.803: INFO: pod pod-service-account-mountsa service account token volume mount: true
Sep 18 03:36:10.836: INFO: created pod pod-service-account-nomountsa
Sep 18 03:36:10.836: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Sep 18 03:36:10.879: INFO: created pod pod-service-account-defaultsa-mountspec
Sep 18 03:36:10.880: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Sep 18 03:36:10.930: INFO: created pod pod-service-account-mountsa-mountspec
Sep 18 03:36:10.930: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Sep 18 03:36:10.943: INFO: created pod pod-service-account-nomountsa-mountspec
Sep 18 03:36:10.944: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Sep 18 03:36:10.960: INFO: created pod pod-service-account-defaultsa-nomountspec
Sep 18 03:36:10.960: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Sep 18 03:36:10.997: INFO: created pod pod-service-account-mountsa-nomountspec
Sep 18 03:36:10.997: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Sep 18 03:36:11.058: INFO: created pod pod-service-account-nomountsa-nomountspec
Sep 18 03:36:11.059: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:36:11.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7317" for this suite.
Sep 18 03:36:39.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:36:39.265: INFO: namespace svcaccounts-7317 deletion completed in 28.182098719s

• [SLOW TEST:29.111 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:36:39.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Sep 18 03:36:39.348: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:36:40.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3030" for this suite.
Sep 18 03:36:46.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:36:46.591: INFO: namespace kubectl-3030 deletion completed in 6.159739111s

• [SLOW TEST:7.319 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:36:46.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:36:46.699: INFO: Creating deployment "nginx-deployment"
Sep 18 03:36:46.711: INFO: Waiting for observed generation 1
Sep 18 03:36:48.722: INFO: Waiting for all required pods to come up
Sep 18 03:36:48.735: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Sep 18 03:36:58.748: INFO: Waiting for deployment "nginx-deployment" to complete
Sep 18 03:36:58.759: INFO: Updating deployment "nginx-deployment" with a non-existent image
Sep 18 03:36:58.769: INFO: Updating deployment nginx-deployment
Sep 18 03:36:58.769: INFO: Waiting for observed generation 2
Sep 18 03:37:00.783: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Sep 18 03:37:00.789: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Sep 18 03:37:00.793: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Sep 18 03:37:00.809: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Sep 18 03:37:00.809: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Sep 18 03:37:00.813: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Sep 18 03:37:00.820: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Sep 18 03:37:00.820: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Sep 18 03:37:00.915: INFO: Updating deployment nginx-deployment
Sep 18 03:37:00.915: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Sep 18 03:37:00.979: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Sep 18 03:37:03.384: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep 18 03:37:03.612: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8825,SelfLink:/apis/apps/v1/namespaces/deployment-8825/deployments/nginx-deployment,UID:d0eaa0ba-9b7d-44b4-a113-4518afe43dbd,ResourceVersion:800297,Generation:3,CreationTimestamp:2020-09-18 03:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-09-18 03:37:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-09-18 03:37:01 +0000 UTC 2020-09-18 03:36:46 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Sep 18 03:37:03.641: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8825,SelfLink:/apis/apps/v1/namespaces/deployment-8825/replicasets/nginx-deployment-55fb7cb77f,UID:8b4deaa6-82b8-466c-9130-3da61c14f650,ResourceVersion:800293,Generation:3,CreationTimestamp:2020-09-18 03:36:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment d0eaa0ba-9b7d-44b4-a113-4518afe43dbd 0x7f0bc37 0x7f0bc38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep 18 03:37:03.641: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Sep 18 03:37:03.642: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8825,SelfLink:/apis/apps/v1/namespaces/deployment-8825/replicasets/nginx-deployment-7b8c6f4498,UID:fd90448b-a9a9-4937-a1b0-bfb74b262b46,ResourceVersion:800285,Generation:3,CreationTimestamp:2020-09-18 03:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment d0eaa0ba-9b7d-44b4-a113-4518afe43dbd 0x7f0bd27 0x7f0bd28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Sep 18 03:37:03.956: INFO: Pod "nginx-deployment-55fb7cb77f-2b9js" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2b9js,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-2b9js,UID:9f0f0c4a-327e-4f57-8249-8931b78b3941,ResourceVersion:800202,Generation:0,CreationTimestamp:2020-09-18 03:36:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8c7b7 0x8b8c7b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8b8c830} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8c850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:36:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.957: INFO: Pod "nginx-deployment-55fb7cb77f-4tp78" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4tp78,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-4tp78,UID:66ba0a21-2a25-453a-b305-8ac7d921cb5d,ResourceVersion:800205,Generation:0,CreationTimestamp:2020-09-18 03:36:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8c930 0x8b8c931}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8b8c9b0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8c9d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-18 03:36:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.958: INFO: Pod "nginx-deployment-55fb7cb77f-69qdn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-69qdn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-69qdn,UID:4a62de76-f875-4eca-8ac9-5cff5ab289d0,ResourceVersion:800299,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8caa0 0x8b8caa1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8b8cb20} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8cb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.960: INFO: Pod "nginx-deployment-55fb7cb77f-6zfxs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6zfxs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-6zfxs,UID:531d6384-96e5-4d5f-af92-914cce9db6bc,ResourceVersion:800359,Generation:0,CreationTimestamp:2020-09-18 03:36:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8cc10 0x8b8cc11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8b8cc90} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8ccb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.79,StartTime:2020-09-18 03:36:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.961: INFO: Pod "nginx-deployment-55fb7cb77f-7cxqw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7cxqw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-7cxqw,UID:3d1f5523-ab59-47a9-8c94-0c0ba23cf4dd,ResourceVersion:800221,Generation:0,CreationTimestamp:2020-09-18 03:36:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8cda0 0x8b8cda1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8b8ce20} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8ce40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:59 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:36:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.962: INFO: Pod "nginx-deployment-55fb7cb77f-9cm27" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9cm27,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-9cm27,UID:f693ffd6-68a6-464e-9e60-73d089ca10c3,ResourceVersion:800314,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8cf10 0x8b8cf11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8b8cf90} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8cfb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.963: INFO: Pod "nginx-deployment-55fb7cb77f-bp9b7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bp9b7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-bp9b7,UID:a409a2a5-d368-4f84-b26a-8ae43d00edd5,ResourceVersion:800347,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8d080 0x8b8d081}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8b8d100} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8d120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.964: INFO: Pod "nginx-deployment-55fb7cb77f-dxddc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dxddc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-dxddc,UID:1f8b2cfc-0034-46b6-86f3-bbcfa5821ae0,ResourceVersion:800223,Generation:0,CreationTimestamp:2020-09-18 03:36:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8d1f0 0x8b8d1f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8b8d270} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8d290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:59 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-18 03:36:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.965: INFO: Pod "nginx-deployment-55fb7cb77f-nnmk8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nnmk8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-nnmk8,UID:d1cddb76-3085-4c80-8ffe-ae463de4b1bf,ResourceVersion:800350,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8d360 0x8b8d361}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8b8d3e0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8d400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.967: INFO: Pod "nginx-deployment-55fb7cb77f-slnvw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-slnvw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-slnvw,UID:705a2ad9-c91f-4f1a-bc81-4758b963e1c2,ResourceVersion:800353,Generation:0,CreationTimestamp:2020-09-18 03:37:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8d4d0 0x8b8d4d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8b8d550} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8d570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.968: INFO: Pod "nginx-deployment-55fb7cb77f-vwqdk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vwqdk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-vwqdk,UID:8f5fa2c3-b2ae-4557-a63e-657d043b3bd4,ResourceVersion:800323,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8d640 0x8b8d641}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8b8d6c0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8d6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.969: INFO: Pod "nginx-deployment-55fb7cb77f-xzddz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xzddz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-xzddz,UID:20359a72-6db7-47e1-becb-c8b6488f9b6e,ResourceVersion:800358,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8d7b0 0x8b8d7b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8b8d830} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8d850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.970: INFO: Pod "nginx-deployment-55fb7cb77f-z7q75" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z7q75,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-55fb7cb77f-z7q75,UID:fd414c36-17db-45d8-aa83-a6ea3ca5c381,ResourceVersion:800302,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8b4deaa6-82b8-466c-9130-3da61c14f650 0x8b8d920 0x8b8d921}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8b8d9a0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8d9c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.971: INFO: Pod "nginx-deployment-7b8c6f4498-28k9p" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-28k9p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-28k9p,UID:cd77805c-6c65-441a-b8d6-efd086b6b85a,ResourceVersion:800113,Generation:0,CreationTimestamp:2020-09-18 03:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x8b8da90 0x8b8da91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8b8db00} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8db20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.74,StartTime:2020-09-18 03:36:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-18 03:36:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://84f39971e7bb6858fe2072704efc00238a9b58f14918f3ee50d429789f3f2e21}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.972: INFO: Pod "nginx-deployment-7b8c6f4498-2mrdg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2mrdg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-2mrdg,UID:8802e1b7-1248-45cc-a2de-7b05829cec8d,ResourceVersion:800156,Generation:0,CreationTimestamp:2020-09-18 03:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x8b8dbf0 0x8b8dbf1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8b8dc60} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8dc80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.115,StartTime:2020-09-18 03:36:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-18 03:36:56 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3189ba89c43d3f8d5e790f76e5c759b848e58f22162285d0fdf21f4916b71089}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.973: INFO: Pod "nginx-deployment-7b8c6f4498-5jrbv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5jrbv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-5jrbv,UID:11764b8b-e050-47a7-97fd-44e12dd9d3fe,ResourceVersion:800140,Generation:0,CreationTimestamp:2020-09-18 03:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x8b8dd50 0x8b8dd51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8b8ddc0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8dde0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.114,StartTime:2020-09-18 03:36:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-18 03:36:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://43ee634fd1317bfeb08bc9d67aa9cfd0d13374c3169c18dd0c63db807ab1c658}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.974: INFO: Pod "nginx-deployment-7b8c6f4498-67g98" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-67g98,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-67g98,UID:79f47633-8442-4964-8a57-1dc6e9603804,ResourceVersion:800325,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x8b8deb0 0x8b8deb1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8b8df20} {node.kubernetes.io/unreachable Exists  NoExecute 0x8b8df40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.976: INFO: Pod "nginx-deployment-7b8c6f4498-7x5ft" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7x5ft,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-7x5ft,UID:8e4963fe-384d-4be4-a69e-c382f7924114,ResourceVersion:800319,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9700000 0x9700001}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x9700070} {node.kubernetes.io/unreachable Exists  NoExecute 0x9700090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.977: INFO: Pod "nginx-deployment-7b8c6f4498-8gskh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8gskh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-8gskh,UID:9e3c651f-77de-4a5e-8155-65a644075660,ResourceVersion:800351,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9700160 0x9700161}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x97001d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x97001f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.978: INFO: Pod "nginx-deployment-7b8c6f4498-cd9f4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cd9f4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-cd9f4,UID:874ca042-a3b4-416e-a560-1ab8ccc299e5,ResourceVersion:800152,Generation:0,CreationTimestamp:2020-09-18 03:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x97002b0 0x97002b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x9700320} {node.kubernetes.io/unreachable Exists  NoExecute 0x9700340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.78,StartTime:2020-09-18 03:36:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-18 03:36:55 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b22c8fbaaec36abe5f1dfb5793a4a5ac99f2e9df0cab171f0e93df9c8f158fd2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.979: INFO: Pod "nginx-deployment-7b8c6f4498-dfvd9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dfvd9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-dfvd9,UID:fbd0cdc5-06f3-47b6-a8c6-396ab8616535,ResourceVersion:800306,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9700410 0x9700411}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x9700480} {node.kubernetes.io/unreachable Exists  NoExecute 0x97004a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.980: INFO: Pod "nginx-deployment-7b8c6f4498-dvx45" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dvx45,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-dvx45,UID:66a597db-e588-4df3-b112-23054e92a483,ResourceVersion:800335,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9700560 0x9700561}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x97005d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x97005f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.981: INFO: Pod "nginx-deployment-7b8c6f4498-f4wsj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f4wsj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-f4wsj,UID:53da5093-94f5-4a9f-afa3-6d7054fb9703,ResourceVersion:800149,Generation:0,CreationTimestamp:2020-09-18 03:36:46 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x97006b0 0x97006b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x9700720} {node.kubernetes.io/unreachable Exists  NoExecute 0x9700740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.77,StartTime:2020-09-18 03:36:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-18 03:36:55 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://18d912f04af4cafbe6e5952efc352d746c8277cd86d48c76a31b0fc605b4725d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.982: INFO: Pod "nginx-deployment-7b8c6f4498-f5pxh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f5pxh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-f5pxh,UID:db2a2430-669c-420f-8c89-1a655f0404d1,ResourceVersion:800304,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9700810 0x9700811}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x9700880} {node.kubernetes.io/unreachable Exists  NoExecute 0x97008a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.983: INFO: Pod "nginx-deployment-7b8c6f4498-f7bml" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f7bml,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-f7bml,UID:eb39afca-0696-4be1-b5d8-f4ad7ef38bd1,ResourceVersion:800331,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9700960 0x9700961}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x97009d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x97009f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.984: INFO: Pod "nginx-deployment-7b8c6f4498-g75pv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g75pv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-g75pv,UID:4b85a816-244b-42fb-ae08-5806ced0d2fa,ResourceVersion:800360,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9700ab0 0x9700ab1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x9700b20} {node.kubernetes.io/unreachable Exists  NoExecute 0x9700b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.986: INFO: Pod "nginx-deployment-7b8c6f4498-hbrmg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hbrmg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-hbrmg,UID:5cc4c3bf-f382-4398-87aa-e7601179f2e3,ResourceVersion:800294,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9700c00 0x9700c01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x9700c70} {node.kubernetes.io/unreachable Exists  NoExecute 0x9700c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.987: INFO: Pod "nginx-deployment-7b8c6f4498-szvpt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-szvpt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-szvpt,UID:2b8ef606-7a2a-4c88-883b-033477b743eb,ResourceVersion:800329,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9700d50 0x9700d51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x9700dc0} {node.kubernetes.io/unreachable Exists  NoExecute 0x9700de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.989: INFO: Pod "nginx-deployment-7b8c6f4498-vlzvd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vlzvd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-vlzvd,UID:8a59e068-2123-4870-a1ea-a018aaf72ae9,ResourceVersion:800121,Generation:0,CreationTimestamp:2020-09-18 03:36:46 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9700ea0 0x9700ea1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x9700f10} {node.kubernetes.io/unreachable Exists  NoExecute 0x9700f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.75,StartTime:2020-09-18 03:36:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-18 03:36:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1a909bb5b69afa0920b414a8f4a1286967c70a1501766a39ba438ba39e9c80f1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.990: INFO: Pod "nginx-deployment-7b8c6f4498-wn2x8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wn2x8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-wn2x8,UID:c71704ba-8a1d-45a7-8809-2290e1a7c737,ResourceVersion:800145,Generation:0,CreationTimestamp:2020-09-18 03:36:46 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9701000 0x9701001}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x9701070} {node.kubernetes.io/unreachable Exists  NoExecute 0x9701090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.113,StartTime:2020-09-18 03:36:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-18 03:36:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f0aa141a37d57ae96e46d55656fd0ae1d9e3e24b0f57ebac6a3e82ba34079fbe}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.991: INFO: Pod "nginx-deployment-7b8c6f4498-x79wt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x79wt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-x79wt,UID:1e4335a3-2d51-4463-bfa0-221f67a8a7da,ResourceVersion:800316,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9701160 0x9701161}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x97011d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x97011f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-18 03:37:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.992: INFO: Pod "nginx-deployment-7b8c6f4498-zgzmv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zgzmv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-zgzmv,UID:d143c4cb-23a7-456d-965a-452da7fba29c,ResourceVersion:800125,Generation:0,CreationTimestamp:2020-09-18 03:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x97012b0 0x97012b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x9701320} {node.kubernetes.io/unreachable Exists  NoExecute 0x9701340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:36:46 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.76,StartTime:2020-09-18 03:36:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-18 03:36:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://35a2d94cca69f1cce6ccb831bfa5d76305933943ed4f3b50543700d3d7ff49fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 18 03:37:03.993: INFO: Pod "nginx-deployment-7b8c6f4498-zpxvj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zpxvj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8825,SelfLink:/api/v1/namespaces/deployment-8825/pods/nginx-deployment-7b8c6f4498-zpxvj,UID:68474a68-51f3-4632-b64e-49b36201d34f,ResourceVersion:800288,Generation:0,CreationTimestamp:2020-09-18 03:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fd90448b-a9a9-4937-a1b0-bfb74b262b46 0x9701410 0x9701411}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j97z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j97z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6j97z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x9701480} {node.kubernetes.io/unreachable Exists  NoExecute 0x97014a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:37:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-18 03:37:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:37:03.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8825" for this suite.
Sep 18 03:37:24.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:37:25.083: INFO: namespace deployment-8825 deletion completed in 20.841397938s

• [SLOW TEST:38.485 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
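For readers reproducing the proportional-scaling run above, the Deployment the suite created can be approximated from the pod dumps in the log (namespace, labels, image, and `terminationGracePeriodSeconds` are taken directly from the dumped PodSpecs; the replica count and manifest layout are assumptions — the test itself scales the replica count up and down):

```yaml
# Sketch reconstructed from the nginx-deployment-7b8c6f4498-* pod dumps above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: deployment-8825
spec:
  replicas: 3              # assumed starting value; the test rescales this
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx       # pod-template-hash is added by the controller
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```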
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:37:25.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Sep 18 03:37:25.221: INFO: PodSpec: initContainers in spec.initContainers
Sep 18 03:38:17.328: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8607070b-1b2d-44dc-905a-260ac0339336", GenerateName:"", Namespace:"init-container-6889", SelfLink:"/api/v1/namespaces/init-container-6889/pods/pod-init-8607070b-1b2d-44dc-905a-260ac0339336", UID:"c010b635-0c65-42e9-a0c7-2355ee58bd35", ResourceVersion:"800743", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735997045, loc:(*time.Location)(0x67985e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"220746071"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rtttp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x935e240), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rtttp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rtttp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rtttp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x94b0348), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0x7464ff0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x94b03d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x94b03f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x94b03f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x94b03fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997045, loc:(*time.Location)(0x67985e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997045, loc:(*time.Location)(0x67985e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997045, loc:(*time.Location)(0x67985e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997045, loc:(*time.Location)(0x67985e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"10.244.2.131", StartTime:(*v1.Time)(0x935e320), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x935e340), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x9748780)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://8d4f75178b213f22a37155145fd4028238807c8bc99573996baeb5d1932b4fac"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x893e0b0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x893e0a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:38:17.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6889" for this suite.
Sep 18 03:38:39.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:38:39.601: INFO: namespace init-container-6889 deletion completed in 22.183732754s

• [SLOW TEST:74.512 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
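The "init container has failed twice" dump above serializes the entire `v1.Pod`; the spec it encodes can be sketched as the following manifest (container names, images, commands, and resource quantities are all taken from the dump; the pod name is illustrative, since the suite generates a UUID-suffixed one). `init1` runs `/bin/false` and always fails, which is exactly why `init2` stays Waiting and `run1` never starts, despite `restartPolicy: Always`:

```yaml
# Sketch reconstructed from the v1.Pod dump above; metadata.name is illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example
  labels:
    name: foo
spec:
  restartPolicy: Always
  terminationGracePeriodSeconds: 0
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always fails; blocks init2 and run1
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                # limits == requests, hence QOSClass: Guaranteed
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"
```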
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:38:39.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep 18 03:38:39.710: INFO: Waiting up to 5m0s for pod "downward-api-123b4fcf-472c-4b6e-8700-db3c7789d3f5" in namespace "downward-api-5854" to be "success or failure"
Sep 18 03:38:39.773: INFO: Pod "downward-api-123b4fcf-472c-4b6e-8700-db3c7789d3f5": Phase="Pending", Reason="", readiness=false. Elapsed: 62.342233ms
Sep 18 03:38:41.781: INFO: Pod "downward-api-123b4fcf-472c-4b6e-8700-db3c7789d3f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070304029s
Sep 18 03:38:43.787: INFO: Pod "downward-api-123b4fcf-472c-4b6e-8700-db3c7789d3f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077067581s
STEP: Saw pod success
Sep 18 03:38:43.788: INFO: Pod "downward-api-123b4fcf-472c-4b6e-8700-db3c7789d3f5" satisfied condition "success or failure"
Sep 18 03:38:43.792: INFO: Trying to get logs from node iruya-worker2 pod downward-api-123b4fcf-472c-4b6e-8700-db3c7789d3f5 container dapi-container: 
STEP: delete the pod
Sep 18 03:38:43.823: INFO: Waiting for pod downward-api-123b4fcf-472c-4b6e-8700-db3c7789d3f5 to disappear
Sep 18 03:38:43.833: INFO: Pod downward-api-123b4fcf-472c-4b6e-8700-db3c7789d3f5 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:38:43.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5854" for this suite.
Sep 18 03:38:49.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:38:49.999: INFO: namespace downward-api-5854 deletion completed in 6.157200242s

• [SLOW TEST:10.394 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
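The Downward API test above creates a pod (container `dapi-container`, per the log) whose environment is populated from pod metadata via `fieldRef`. A minimal sketch of that shape — the env var names and busybox image are assumptions, only the container name comes from the log; the `fieldPath` values are the standard Downward API selectors for name, namespace, and IP:

```yaml
# Illustrative pod exposing pod name, namespace, and IP as env vars.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]   # the test reads these values from the logs
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```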
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:38:50.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4565/configmap-test-789d24d1-2985-4c83-9b7c-5e510c436401
STEP: Creating a pod to test consume configMaps
Sep 18 03:38:50.100: INFO: Waiting up to 5m0s for pod "pod-configmaps-ca8e8f79-6333-4ea6-8843-ce1b00e440b1" in namespace "configmap-4565" to be "success or failure"
Sep 18 03:38:50.125: INFO: Pod "pod-configmaps-ca8e8f79-6333-4ea6-8843-ce1b00e440b1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.508421ms
Sep 18 03:38:52.141: INFO: Pod "pod-configmaps-ca8e8f79-6333-4ea6-8843-ce1b00e440b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040966629s
Sep 18 03:38:54.148: INFO: Pod "pod-configmaps-ca8e8f79-6333-4ea6-8843-ce1b00e440b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048080986s
STEP: Saw pod success
Sep 18 03:38:54.148: INFO: Pod "pod-configmaps-ca8e8f79-6333-4ea6-8843-ce1b00e440b1" satisfied condition "success or failure"
Sep 18 03:38:54.154: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ca8e8f79-6333-4ea6-8843-ce1b00e440b1 container env-test: 
STEP: delete the pod
Sep 18 03:38:54.178: INFO: Waiting for pod pod-configmaps-ca8e8f79-6333-4ea6-8843-ce1b00e440b1 to disappear
Sep 18 03:38:54.188: INFO: Pod pod-configmaps-ca8e8f79-6333-4ea6-8843-ce1b00e440b1 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:38:54.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4565" for this suite.
Sep 18 03:39:00.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:39:00.345: INFO: namespace configmap-4565 deletion completed in 6.150123719s

• [SLOW TEST:10.344 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
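The ConfigMap test above creates a ConfigMap and a pod (container `env-test`, per the log) that consumes it as an environment variable via `configMapKeyRef`. A hedged sketch of that pattern — the data key/value, env var name, and image are assumptions; only the container name and the ConfigMap-to-env mechanism are grounded in the log:

```yaml
# Illustrative ConfigMap consumed as an env var; key names are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test   # the suite uses a UUID-suffixed name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```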
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:39:00.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 18 03:39:00.447: INFO: Waiting up to 5m0s for pod "pod-67951c97-f7e3-4bcf-90a5-65e16aefaba8" in namespace "emptydir-6759" to be "success or failure"
Sep 18 03:39:00.491: INFO: Pod "pod-67951c97-f7e3-4bcf-90a5-65e16aefaba8": Phase="Pending", Reason="", readiness=false. Elapsed: 43.905419ms
Sep 18 03:39:02.499: INFO: Pod "pod-67951c97-f7e3-4bcf-90a5-65e16aefaba8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052062452s
Sep 18 03:39:04.507: INFO: Pod "pod-67951c97-f7e3-4bcf-90a5-65e16aefaba8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059739457s
STEP: Saw pod success
Sep 18 03:39:04.507: INFO: Pod "pod-67951c97-f7e3-4bcf-90a5-65e16aefaba8" satisfied condition "success or failure"
Sep 18 03:39:04.511: INFO: Trying to get logs from node iruya-worker2 pod pod-67951c97-f7e3-4bcf-90a5-65e16aefaba8 container test-container: 
STEP: delete the pod
Sep 18 03:39:04.686: INFO: Waiting for pod pod-67951c97-f7e3-4bcf-90a5-65e16aefaba8 to disappear
Sep 18 03:39:04.698: INFO: Pod pod-67951c97-f7e3-4bcf-90a5-65e16aefaba8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:39:04.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6759" for this suite.
Sep 18 03:39:10.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:39:10.881: INFO: namespace emptydir-6759 deletion completed in 6.175549314s

• [SLOW TEST:10.532 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
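The EmptyDir test above mounts a volume on the default medium (node disk, as opposed to `medium: Memory`) and checks the mount point's permission bits. A stand-in sketch using busybox instead of the suite's test image (image and command are assumptions; the container name `test-container` and the default-medium emptyDir are from the log):

```yaml
# Stand-in sketch: mount an emptyDir on the default medium and print its mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}          # no medium set, i.e. the node's default storage
```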
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:39:10.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 03:39:10.989: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20162378-c348-4883-94ea-e79ec62a038c" in namespace "projected-2260" to be "success or failure"
Sep 18 03:39:11.115: INFO: Pod "downwardapi-volume-20162378-c348-4883-94ea-e79ec62a038c": Phase="Pending", Reason="", readiness=false. Elapsed: 125.224428ms
Sep 18 03:39:13.121: INFO: Pod "downwardapi-volume-20162378-c348-4883-94ea-e79ec62a038c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132110891s
Sep 18 03:39:15.129: INFO: Pod "downwardapi-volume-20162378-c348-4883-94ea-e79ec62a038c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.139790713s
STEP: Saw pod success
Sep 18 03:39:15.129: INFO: Pod "downwardapi-volume-20162378-c348-4883-94ea-e79ec62a038c" satisfied condition "success or failure"
Sep 18 03:39:15.134: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-20162378-c348-4883-94ea-e79ec62a038c container client-container: 
STEP: delete the pod
Sep 18 03:39:15.159: INFO: Waiting for pod downwardapi-volume-20162378-c348-4883-94ea-e79ec62a038c to disappear
Sep 18 03:39:15.163: INFO: Pod downwardapi-volume-20162378-c348-4883-94ea-e79ec62a038c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:39:15.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2260" for this suite.
Sep 18 03:39:21.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:39:21.343: INFO: namespace projected-2260 deletion completed in 6.17160337s

• [SLOW TEST:10.459 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
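The test above checks that when a container declares no CPU limit, a projected downward API volume reports the node's allocatable CPU as the default `limits.cpu` value. A minimal sketch of the kind of pod the framework creates for this case — the image and file names here are illustrative assumptions, since the log only prints the pod name:

```yaml
# Hypothetical manifest for the "default cpu limit" downward API test.
# The container sets NO resources.limits.cpu, so the limits.cpu item in
# the projected volume falls back to the node's allocatable CPU.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu-example   # illustrative; real pods get a generated UUID suffix
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # image assumed, not shown in the log
    args: ["--file_content=/etc/podinfo/cpu_limit"]          # prints the file, then exits (Succeeded)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```

The "success or failure" condition in the log corresponds to the pod reaching phase `Succeeded` after the container prints the file and exits 0.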
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:39:21.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 03:39:21.442: INFO: Waiting up to 5m0s for pod "downwardapi-volume-825c0722-b4ec-4113-82d3-466bdfd4ac54" in namespace "downward-api-8848" to be "success or failure"
Sep 18 03:39:21.474: INFO: Pod "downwardapi-volume-825c0722-b4ec-4113-82d3-466bdfd4ac54": Phase="Pending", Reason="", readiness=false. Elapsed: 30.994776ms
Sep 18 03:39:23.480: INFO: Pod "downwardapi-volume-825c0722-b4ec-4113-82d3-466bdfd4ac54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036895738s
Sep 18 03:39:25.486: INFO: Pod "downwardapi-volume-825c0722-b4ec-4113-82d3-466bdfd4ac54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043824909s
STEP: Saw pod success
Sep 18 03:39:25.487: INFO: Pod "downwardapi-volume-825c0722-b4ec-4113-82d3-466bdfd4ac54" satisfied condition "success or failure"
Sep 18 03:39:25.491: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-825c0722-b4ec-4113-82d3-466bdfd4ac54 container client-container: 
STEP: delete the pod
Sep 18 03:39:25.540: INFO: Waiting for pod downwardapi-volume-825c0722-b4ec-4113-82d3-466bdfd4ac54 to disappear
Sep 18 03:39:25.547: INFO: Pod downwardapi-volume-825c0722-b4ec-4113-82d3-466bdfd4ac54 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:39:25.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8848" for this suite.
Sep 18 03:39:31.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:39:31.754: INFO: namespace downward-api-8848 deletion completed in 6.199157294s

• [SLOW TEST:10.409 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
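The DefaultMode test above verifies that files in a downward API volume are created with the volume-level `defaultMode` permission bits when no per-item mode is set. A sketch of such a volume, with illustrative names and an assumed test image:

```yaml
# Hypothetical manifest for the "DefaultMode on files" test: every file in
# the volume inherits mode 0400 because no per-item `mode` overrides it.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # image assumed, not shown in the log
    args: ["--file_perm=/etc/podinfo/podname"]               # prints the file's permission bits
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # r-------- ; applies to all items lacking an explicit mode
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

This is Linux-only (hence the `[LinuxOnly]` tag) because file mode bits are not enforced the same way on Windows nodes.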
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:39:31.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:39:31.913: INFO: Pod name rollover-pod: Found 0 pods out of 1
Sep 18 03:39:36.920: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Sep 18 03:39:36.921: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Sep 18 03:39:38.927: INFO: Creating deployment "test-rollover-deployment"
Sep 18 03:39:38.937: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Sep 18 03:39:40.976: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Sep 18 03:39:40.992: INFO: Ensure that both replica sets have 1 created replica
Sep 18 03:39:41.001: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Sep 18 03:39:41.009: INFO: Updating deployment test-rollover-deployment
Sep 18 03:39:41.010: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Sep 18 03:39:43.594: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Sep 18 03:39:43.616: INFO: Make sure deployment "test-rollover-deployment" is complete
Sep 18 03:39:43.627: INFO: all replica sets need to contain the pod-template-hash label
Sep 18 03:39:43.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997179, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997179, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997181, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997178, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 18 03:39:45.643: INFO: all replica sets need to contain the pod-template-hash label
Sep 18 03:39:45.644: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997179, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997179, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997185, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997178, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 18 03:39:47.662: INFO: all replica sets need to contain the pod-template-hash label
Sep 18 03:39:47.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997179, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997179, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997185, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997178, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 18 03:39:49.642: INFO: all replica sets need to contain the pod-template-hash label
Sep 18 03:39:49.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997179, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997179, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997185, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997178, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 18 03:39:51.647: INFO: all replica sets need to contain the pod-template-hash label
Sep 18 03:39:51.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997179, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997179, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997185, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997178, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 18 03:39:53.648: INFO: all replica sets need to contain the pod-template-hash label
Sep 18 03:39:53.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997179, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997179, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997185, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735997178, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 18 03:39:55.644: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep 18 03:39:55.660: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-747,SelfLink:/apis/apps/v1/namespaces/deployment-747/deployments/test-rollover-deployment,UID:1a0a394e-a3a3-4217-8d4a-4fb85b38b9cb,ResourceVersion:801147,Generation:2,CreationTimestamp:2020-09-18 03:39:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-09-18 03:39:39 +0000 UTC 2020-09-18 03:39:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-09-18 03:39:55 +0000 UTC 2020-09-18 03:39:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Sep 18 03:39:55.668: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-747,SelfLink:/apis/apps/v1/namespaces/deployment-747/replicasets/test-rollover-deployment-854595fc44,UID:f0a39fc1-e195-4ce4-9c51-650623991a86,ResourceVersion:801136,Generation:2,CreationTimestamp:2020-09-18 03:39:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1a0a394e-a3a3-4217-8d4a-4fb85b38b9cb 0x95dd447 0x95dd448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Sep 18 03:39:55.668: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Sep 18 03:39:55.669: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-747,SelfLink:/apis/apps/v1/namespaces/deployment-747/replicasets/test-rollover-controller,UID:d6ecd271-e657-49c4-a8c9-eb791d7fe361,ResourceVersion:801145,Generation:2,CreationTimestamp:2020-09-18 03:39:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1a0a394e-a3a3-4217-8d4a-4fb85b38b9cb 0x95dd377 0x95dd378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep 18 03:39:55.671: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-747,SelfLink:/apis/apps/v1/namespaces/deployment-747/replicasets/test-rollover-deployment-9b8b997cf,UID:ab245bdc-9c81-4d34-8213-1abb09e139c1,ResourceVersion:801098,Generation:2,CreationTimestamp:2020-09-18 03:39:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1a0a394e-a3a3-4217-8d4a-4fb85b38b9cb 0x95dd510 0x95dd511}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep 18 03:39:55.679: INFO: Pod "test-rollover-deployment-854595fc44-z8wcb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-z8wcb,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-747,SelfLink:/api/v1/namespaces/deployment-747/pods/test-rollover-deployment-854595fc44-z8wcb,UID:197771d2-60c4-4f37-be51-a0bb10b07179,ResourceVersion:801114,Generation:0,CreationTimestamp:2020-09-18 03:39:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 f0a39fc1-e195-4ce4-9c51-650623991a86 0x9504257 0x9504258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-v6rgw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v6rgw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-v6rgw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x95043d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x95043f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:39:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:39:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:39:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:39:41 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.135,StartTime:2020-09-18 03:39:41 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-09-18 03:39:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://9bde8d496bc18e370f8cbab64c948ad76363fc631abb24e45bd2befe157f21ce}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:39:55.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-747" for this suite.
Sep 18 03:40:01.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:40:01.951: INFO: namespace deployment-747 deletion completed in 6.26522291s

• [SLOW TEST:30.194 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
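The rollover test's repeated status dumps show why the update takes ~14 seconds: the Deployment spec dumped above uses `MinReadySeconds:10` with `MaxUnavailable:0, MaxSurge:1`, so the new pod must be Ready for 10 seconds before the old ReplicaSet can be scaled down. Reassembled from the fields printed in the dump (only the container list is trimmed for brevity):

```yaml
# Deployment reconstructed from the log's object dump above; it rolls over
# from the hand-made "test-rollover-controller" ReplicaSet without ever
# dropping below one available replica.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10        # new pod must stay Ready 10s before counting as available
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never go below the desired replica count
      maxSurge: 1            # allow one extra pod while rolling over
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

The intermediate revision (`test-rollover-deployment-9b8b997cf`, pointing at the nonexistent `gb-redisslave` image) is scaled to zero once the updated revision 2 ReplicaSet (`854595fc44`) progresses, which is exactly what the final "Ensure that both old replica sets have no replicas" step asserts.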
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:40:01.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-453
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-453 to expose endpoints map[]
Sep 18 03:40:02.114: INFO: successfully validated that service endpoint-test2 in namespace services-453 exposes endpoints map[] (16.99767ms elapsed)
STEP: Creating pod pod1 in namespace services-453
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-453 to expose endpoints map[pod1:[80]]
Sep 18 03:40:05.184: INFO: successfully validated that service endpoint-test2 in namespace services-453 exposes endpoints map[pod1:[80]] (3.056696593s elapsed)
STEP: Creating pod pod2 in namespace services-453
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-453 to expose endpoints map[pod1:[80] pod2:[80]]
Sep 18 03:40:09.296: INFO: successfully validated that service endpoint-test2 in namespace services-453 exposes endpoints map[pod1:[80] pod2:[80]] (4.104605817s elapsed)
STEP: Deleting pod pod1 in namespace services-453
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-453 to expose endpoints map[pod2:[80]]
Sep 18 03:40:09.387: INFO: successfully validated that service endpoint-test2 in namespace services-453 exposes endpoints map[pod2:[80]] (82.939205ms elapsed)
STEP: Deleting pod pod2 in namespace services-453
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-453 to expose endpoints map[]
Sep 18 03:40:09.399: INFO: successfully validated that service endpoint-test2 in namespace services-453 exposes endpoints map[] (5.795126ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:40:09.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-453" for this suite.
Sep 18 03:40:15.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:40:15.831: INFO: namespace services-453 deletion completed in 6.171169993s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:13.879 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
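The endpoint bookkeeping validated in the section above — the service's endpoint map growing and shrinking as pod1 and pod2 come and go — can be sketched as a minimal Python helper. The function name and the pod/selector shapes are hypothetical illustrations; the real check lives in the Go e2e framework.

```python
def expected_endpoints(pods, selector, port=80):
    """Build the name -> [ports] map a Service should expose, mirroring
    the map[pod1:[80] pod2:[80]] shape in the log. Only running pods
    whose labels satisfy the selector are included."""
    return {
        pod["name"]: [port]
        for pod in pods
        if pod.get("running") and all(
            pod.get("labels", {}).get(k) == v for k, v in selector.items()
        )
    }

# Mirrors the log's progression: empty -> pod1 -> pod1+pod2 -> pod2 -> empty
selector = {"app": "endpoint-test2"}  # hypothetical selector
pod1 = {"name": "pod1", "labels": selector, "running": True}
pod2 = {"name": "pod2", "labels": selector, "running": True}

assert expected_endpoints([], selector) == {}
assert expected_endpoints([pod1], selector) == {"pod1": [80]}
assert expected_endpoints([pod1, pod2], selector) == {"pod1": [80], "pod2": [80]}
assert expected_endpoints([pod2], selector) == {"pod2": [80]}
```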
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:40:15.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 03:40:15.926: INFO: Waiting up to 5m0s for pod "downwardapi-volume-347385d9-04c8-48fa-97ac-3318afbc86b2" in namespace "downward-api-4784" to be "success or failure"
Sep 18 03:40:15.965: INFO: Pod "downwardapi-volume-347385d9-04c8-48fa-97ac-3318afbc86b2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.418019ms
Sep 18 03:40:17.972: INFO: Pod "downwardapi-volume-347385d9-04c8-48fa-97ac-3318afbc86b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046185219s
Sep 18 03:40:19.978: INFO: Pod "downwardapi-volume-347385d9-04c8-48fa-97ac-3318afbc86b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052096484s
STEP: Saw pod success
Sep 18 03:40:19.978: INFO: Pod "downwardapi-volume-347385d9-04c8-48fa-97ac-3318afbc86b2" satisfied condition "success or failure"
Sep 18 03:40:19.982: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-347385d9-04c8-48fa-97ac-3318afbc86b2 container client-container: 
STEP: delete the pod
Sep 18 03:40:20.033: INFO: Waiting for pod downwardapi-volume-347385d9-04c8-48fa-97ac-3318afbc86b2 to disappear
Sep 18 03:40:20.059: INFO: Pod downwardapi-volume-347385d9-04c8-48fa-97ac-3318afbc86b2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:40:20.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4784" for this suite.
Sep 18 03:40:26.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:40:26.234: INFO: namespace downward-api-4784 deletion completed in 6.164139074s

• [SLOW TEST:10.401 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
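The fallback asserted by the Downward API test above — `limits.memory` resolving to the node's allocatable memory when the container sets no limit — can be sketched as follows. The function name and the 8 GiB figure are illustrative assumptions, not values from the run.

```python
def downward_memory_limit(container_limits, node_allocatable_memory):
    """Value exposed via resourceFieldRef limits.memory: the container's
    own limit when set, otherwise the node's allocatable memory
    (the default this test asserts)."""
    return container_limits.get("memory", node_allocatable_memory)

node_alloc = 8 * 1024**3  # assume an 8 GiB allocatable node
assert downward_memory_limit({}, node_alloc) == node_alloc                     # no limit set
assert downward_memory_limit({"memory": 512 * 1024**2}, node_alloc) == 512 * 1024**2
```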
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:40:26.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 18 03:40:26.325: INFO: Waiting up to 5m0s for pod "pod-3a7db7cf-23cf-41b7-8387-7071ba834902" in namespace "emptydir-8470" to be "success or failure"
Sep 18 03:40:26.335: INFO: Pod "pod-3a7db7cf-23cf-41b7-8387-7071ba834902": Phase="Pending", Reason="", readiness=false. Elapsed: 9.708319ms
Sep 18 03:40:28.341: INFO: Pod "pod-3a7db7cf-23cf-41b7-8387-7071ba834902": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016540525s
Sep 18 03:40:30.349: INFO: Pod "pod-3a7db7cf-23cf-41b7-8387-7071ba834902": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023663318s
STEP: Saw pod success
Sep 18 03:40:30.349: INFO: Pod "pod-3a7db7cf-23cf-41b7-8387-7071ba834902" satisfied condition "success or failure"
Sep 18 03:40:30.354: INFO: Trying to get logs from node iruya-worker2 pod pod-3a7db7cf-23cf-41b7-8387-7071ba834902 container test-container: 
STEP: delete the pod
Sep 18 03:40:30.377: INFO: Waiting for pod pod-3a7db7cf-23cf-41b7-8387-7071ba834902 to disappear
Sep 18 03:40:30.382: INFO: Pod pod-3a7db7cf-23cf-41b7-8387-7071ba834902 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:40:30.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8470" for this suite.
Sep 18 03:40:36.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:40:36.554: INFO: namespace emptydir-8470 deletion completed in 6.163943307s

• [SLOW TEST:10.318 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
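The `(root,0777,default)` case above checks the mode bits of an emptyDir mount. A local sketch of the same permission check, using a scratch directory instead of a pod volume (the `dir_mode` helper is hypothetical):

```python
import os
import stat
import tempfile

def dir_mode(path):
    """Return the permission bits of path, analogous to the mode check
    the test container performs on the mounted emptyDir."""
    return stat.S_IMODE(os.lstat(path).st_mode)

with tempfile.TemporaryDirectory() as scratch:
    volume = os.path.join(scratch, "volume")
    os.mkdir(volume)
    os.chmod(volume, 0o777)  # the (root,0777,default) case from the test name
    assert dir_mode(volume) == 0o777
```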
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:40:36.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:40:41.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6368" for this suite.
Sep 18 03:41:11.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:41:11.855: INFO: namespace replication-controller-6368 deletion completed in 30.141701474s

• [SLOW TEST:35.298 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
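The adoption semantics in the ReplicationController section above — a bare pod whose labels match the controller's selector is adopted, while pods already owned by a controller are left alone — can be sketched like this. The dict shapes and the `adoptable` helper are illustrative, not the controller-manager's real data structures.

```python
def adoptable(pod, selector):
    """A pod is adoptable by a controller when its labels match the
    controller's selector and it has no existing controller owner."""
    labels_match = all(pod.get("labels", {}).get(k) == v for k, v in selector.items())
    return labels_match and pod.get("controller_owner") is None

orphan = {"name": "pod-adoption", "labels": {"name": "pod-adoption"}}
owned = {"name": "other", "labels": {"name": "pod-adoption"},
         "controller_owner": "some-other-rc"}

assert adoptable(orphan, {"name": "pod-adoption"})     # the log's orphan pod is adopted
assert not adoptable(owned, {"name": "pod-adoption"})  # already-owned pods are skipped
```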
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:41:11.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 18 03:41:11.937: INFO: Waiting up to 5m0s for pod "pod-9b49053f-b84c-4cfc-aec5-20e795fa95c1" in namespace "emptydir-3912" to be "success or failure"
Sep 18 03:41:12.020: INFO: Pod "pod-9b49053f-b84c-4cfc-aec5-20e795fa95c1": Phase="Pending", Reason="", readiness=false. Elapsed: 82.928084ms
Sep 18 03:41:14.026: INFO: Pod "pod-9b49053f-b84c-4cfc-aec5-20e795fa95c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088810792s
Sep 18 03:41:16.033: INFO: Pod "pod-9b49053f-b84c-4cfc-aec5-20e795fa95c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095541369s
STEP: Saw pod success
Sep 18 03:41:16.033: INFO: Pod "pod-9b49053f-b84c-4cfc-aec5-20e795fa95c1" satisfied condition "success or failure"
Sep 18 03:41:16.097: INFO: Trying to get logs from node iruya-worker pod pod-9b49053f-b84c-4cfc-aec5-20e795fa95c1 container test-container: 
STEP: delete the pod
Sep 18 03:41:16.360: INFO: Waiting for pod pod-9b49053f-b84c-4cfc-aec5-20e795fa95c1 to disappear
Sep 18 03:41:16.383: INFO: Pod pod-9b49053f-b84c-4cfc-aec5-20e795fa95c1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:41:16.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3912" for this suite.
Sep 18 03:41:22.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:41:22.566: INFO: namespace emptydir-3912 deletion completed in 6.172998943s

• [SLOW TEST:10.708 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:41:22.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Sep 18 03:41:22.679: INFO: Waiting up to 5m0s for pod "var-expansion-4f462819-9a1c-4e56-8a8a-57b42e7697b3" in namespace "var-expansion-9893" to be "success or failure"
Sep 18 03:41:22.694: INFO: Pod "var-expansion-4f462819-9a1c-4e56-8a8a-57b42e7697b3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.49051ms
Sep 18 03:41:24.700: INFO: Pod "var-expansion-4f462819-9a1c-4e56-8a8a-57b42e7697b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020160237s
Sep 18 03:41:26.708: INFO: Pod "var-expansion-4f462819-9a1c-4e56-8a8a-57b42e7697b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028195197s
STEP: Saw pod success
Sep 18 03:41:26.708: INFO: Pod "var-expansion-4f462819-9a1c-4e56-8a8a-57b42e7697b3" satisfied condition "success or failure"
Sep 18 03:41:26.712: INFO: Trying to get logs from node iruya-worker pod var-expansion-4f462819-9a1c-4e56-8a8a-57b42e7697b3 container dapi-container: 
STEP: delete the pod
Sep 18 03:41:26.731: INFO: Waiting for pod var-expansion-4f462819-9a1c-4e56-8a8a-57b42e7697b3 to disappear
Sep 18 03:41:26.735: INFO: Pod var-expansion-4f462819-9a1c-4e56-8a8a-57b42e7697b3 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:41:26.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9893" for this suite.
Sep 18 03:41:32.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:41:32.901: INFO: namespace var-expansion-9893 deletion completed in 6.159102352s

• [SLOW TEST:10.332 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
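The env composition tested above relies on Kubernetes `$(VAR)` expansion in container env values: a reference resolves when the variable was defined earlier in the list, `$$` escapes a literal `$`, and unresolvable references are left verbatim. A minimal sketch of those semantics (the variable names below are hypothetical, not the test pod's actual env):

```python
def expand(value, env):
    """Minimal sketch of Kubernetes $(VAR) expansion: $$ escapes to a
    literal $, $(VAR) is replaced when VAR is already defined, and
    unresolvable references are left as-is."""
    out = []
    i = 0
    while i < len(value):
        if value.startswith("$$", i):
            out.append("$")
            i += 2
        elif value.startswith("$(", i):
            end = value.find(")", i)
            name = value[i + 2:end] if end != -1 else None
            if name is not None and name in env:
                out.append(env[name])
                i = end + 1
            else:
                out.append(value[i])  # leave the unresolved reference intact
                i += 1
        else:
            out.append(value[i])
            i += 1
    return "".join(out)

# Compose env vars in order, the way the kubelet resolves a container's env list.
env = {}
env["FOO"] = expand("foo-value", env)
env["BAR"] = expand("bar-value", env)
env["FOOBAR"] = expand("$(FOO);;$(BAR)", env)
assert env["FOOBAR"] == "foo-value;;bar-value"
assert expand("$$(FOO)", env) == "$(FOO)"       # escaped reference stays literal
assert expand("$(MISSING)", env) == "$(MISSING)"
```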
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:41:32.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Sep 18 03:41:37.530: INFO: Successfully updated pod "labelsupdate2a0b139b-7977-4ccc-a655-dc5b75a74c54"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:41:41.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1873" for this suite.
Sep 18 03:42:03.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:42:03.746: INFO: namespace projected-1873 deletion completed in 22.170150372s

• [SLOW TEST:30.842 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:42:03.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Sep 18 03:42:03.799: INFO: Waiting up to 5m0s for pod "client-containers-974fc3a4-ca20-47e5-a470-528301df536b" in namespace "containers-8690" to be "success or failure"
Sep 18 03:42:03.815: INFO: Pod "client-containers-974fc3a4-ca20-47e5-a470-528301df536b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.328326ms
Sep 18 03:42:05.822: INFO: Pod "client-containers-974fc3a4-ca20-47e5-a470-528301df536b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022353007s
Sep 18 03:42:07.829: INFO: Pod "client-containers-974fc3a4-ca20-47e5-a470-528301df536b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029915298s
STEP: Saw pod success
Sep 18 03:42:07.829: INFO: Pod "client-containers-974fc3a4-ca20-47e5-a470-528301df536b" satisfied condition "success or failure"
Sep 18 03:42:07.835: INFO: Trying to get logs from node iruya-worker2 pod client-containers-974fc3a4-ca20-47e5-a470-528301df536b container test-container: 
STEP: delete the pod
Sep 18 03:42:07.906: INFO: Waiting for pod client-containers-974fc3a4-ca20-47e5-a470-528301df536b to disappear
Sep 18 03:42:07.995: INFO: Pod client-containers-974fc3a4-ca20-47e5-a470-528301df536b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:42:07.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8690" for this suite.
Sep 18 03:42:14.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:42:14.187: INFO: namespace containers-8690 deletion completed in 6.183878938s

• [SLOW TEST:10.438 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:42:14.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Sep 18 03:42:22.374: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 18 03:42:22.427: INFO: Pod pod-with-poststart-http-hook still exists
Sep 18 03:42:24.428: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 18 03:42:24.435: INFO: Pod pod-with-poststart-http-hook still exists
Sep 18 03:42:26.428: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 18 03:42:26.435: INFO: Pod pod-with-poststart-http-hook still exists
Sep 18 03:42:28.428: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 18 03:42:28.434: INFO: Pod pod-with-poststart-http-hook still exists
Sep 18 03:42:30.428: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 18 03:42:30.435: INFO: Pod pod-with-poststart-http-hook still exists
Sep 18 03:42:32.428: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 18 03:42:32.436: INFO: Pod pod-with-poststart-http-hook still exists
Sep 18 03:42:34.428: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 18 03:42:34.436: INFO: Pod pod-with-poststart-http-hook still exists
Sep 18 03:42:36.428: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 18 03:42:36.469: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:42:36.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7360" for this suite.
Sep 18 03:42:58.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:42:58.723: INFO: namespace container-lifecycle-hook-7360 deletion completed in 22.243384745s

• [SLOW TEST:44.535 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:42:58.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Sep 18 03:42:58.775: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 18 03:42:58.807: INFO: Waiting for terminating namespaces to be deleted...
Sep 18 03:42:58.811: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Sep 18 03:42:58.822: INFO: kindnet-85m7h from kube-system started at 2020-09-13 16:51:07 +0000 UTC (1 container status recorded)
Sep 18 03:42:58.823: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 18 03:42:58.823: INFO: kube-proxy-xbqp2 from kube-system started at 2020-09-13 16:51:07 +0000 UTC (1 container status recorded)
Sep 18 03:42:58.823: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 18 03:42:58.823: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Sep 18 03:42:58.855: INFO: kube-proxy-v7g67 from kube-system started at 2020-09-13 16:51:07 +0000 UTC (1 container status recorded)
Sep 18 03:42:58.855: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 18 03:42:58.855: INFO: kindnet-jxh2j from kube-system started at 2020-09-13 16:51:07 +0000 UTC (1 container status recorded)
Sep 18 03:42:58.855: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.1635c3d8e36a45cd], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:42:59.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1623" for this suite.
Sep 18 03:43:05.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:43:06.084: INFO: namespace sched-pred-1623 deletion completed in 6.165989672s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.356 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
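The FailedScheduling event above comes from the nodeSelector predicate: every key/value in the pod's nodeSelector must be present on the node's labels. A sketch of that check and of the event's summary string (the node names and labels below are assumptions; the log confirms only that the cluster has three nodes, none matching):

```python
def node_matches(node_labels, node_selector):
    """nodeSelector predicate: every selector key/value must appear
    among the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

def scheduling_message(nodes, node_selector):
    """Format a summary like the FailedScheduling event in the log."""
    fitting = sum(node_matches(labels, node_selector) for labels in nodes.values())
    failed = len(nodes) - fitting
    return "%d/%d nodes are available: %d node(s) didn't match node selector." % (
        fitting, len(nodes), failed)

# Three nodes, none carrying the nonempty selector the restricted pod asks for.
nodes = {
    "iruya-control-plane": {"kubernetes.io/hostname": "iruya-control-plane"},
    "iruya-worker": {"kubernetes.io/hostname": "iruya-worker"},
    "iruya-worker2": {"kubernetes.io/hostname": "iruya-worker2"},
}
msg = scheduling_message(nodes, {"label": "nonexistent-value"})
assert msg == "0/3 nodes are available: 3 node(s) didn't match node selector."
```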
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:43:06.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Sep 18 03:43:06.192: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6715,SelfLink:/api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-label-changed,UID:477b6ee7-fb60-4dc6-9afe-b5ff19795483,ResourceVersion:801838,Generation:0,CreationTimestamp:2020-09-18 03:43:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 18 03:43:06.192: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6715,SelfLink:/api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-label-changed,UID:477b6ee7-fb60-4dc6-9afe-b5ff19795483,ResourceVersion:801839,Generation:0,CreationTimestamp:2020-09-18 03:43:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Sep 18 03:43:06.193: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6715,SelfLink:/api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-label-changed,UID:477b6ee7-fb60-4dc6-9afe-b5ff19795483,ResourceVersion:801840,Generation:0,CreationTimestamp:2020-09-18 03:43:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Sep 18 03:43:16.227: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6715,SelfLink:/api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-label-changed,UID:477b6ee7-fb60-4dc6-9afe-b5ff19795483,ResourceVersion:801861,Generation:0,CreationTimestamp:2020-09-18 03:43:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 18 03:43:16.228: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6715,SelfLink:/api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-label-changed,UID:477b6ee7-fb60-4dc6-9afe-b5ff19795483,ResourceVersion:801862,Generation:0,CreationTimestamp:2020-09-18 03:43:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Sep 18 03:43:16.229: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6715,SelfLink:/api/v1/namespaces/watch-6715/configmaps/e2e-watch-test-label-changed,UID:477b6ee7-fb60-4dc6-9afe-b5ff19795483,ResourceVersion:801863,Generation:0,CreationTimestamp:2020-09-18 03:43:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:43:16.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6715" for this suite.
Sep 18 03:43:22.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:43:22.422: INFO: namespace watch-6715 deletion completed in 6.184067133s

• [SLOW TEST:16.333 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:43:22.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 18 03:43:22.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-412'
Sep 18 03:43:26.202: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 18 03:43:26.202: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Sep 18 03:43:26.222: INFO: scanned /root for discovery docs: 
Sep 18 03:43:26.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-412'
Sep 18 03:43:43.506: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Sep 18 03:43:43.506: INFO: stdout: "Created e2e-test-nginx-rc-a2cfa7a1b51dc5e6d7e17a1bed9a64b8\nScaling up e2e-test-nginx-rc-a2cfa7a1b51dc5e6d7e17a1bed9a64b8 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a2cfa7a1b51dc5e6d7e17a1bed9a64b8 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a2cfa7a1b51dc5e6d7e17a1bed9a64b8 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Sep 18 03:43:43.507: INFO: stdout: "Created e2e-test-nginx-rc-a2cfa7a1b51dc5e6d7e17a1bed9a64b8\nScaling up e2e-test-nginx-rc-a2cfa7a1b51dc5e6d7e17a1bed9a64b8 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a2cfa7a1b51dc5e6d7e17a1bed9a64b8 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a2cfa7a1b51dc5e6d7e17a1bed9a64b8 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Sep 18 03:43:43.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-412'
Sep 18 03:43:44.625: INFO: stderr: ""
Sep 18 03:43:44.626: INFO: stdout: "e2e-test-nginx-rc-a2cfa7a1b51dc5e6d7e17a1bed9a64b8-pt5kb "
Sep 18 03:43:44.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a2cfa7a1b51dc5e6d7e17a1bed9a64b8-pt5kb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-412'
Sep 18 03:43:45.736: INFO: stderr: ""
Sep 18 03:43:45.736: INFO: stdout: "true"
Sep 18 03:43:45.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a2cfa7a1b51dc5e6d7e17a1bed9a64b8-pt5kb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-412'
Sep 18 03:43:46.853: INFO: stderr: ""
Sep 18 03:43:46.853: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Sep 18 03:43:46.853: INFO: e2e-test-nginx-rc-a2cfa7a1b51dc5e6d7e17a1bed9a64b8-pt5kb is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Sep 18 03:43:46.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-412'
Sep 18 03:43:47.967: INFO: stderr: ""
Sep 18 03:43:47.967: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:43:47.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-412" for this suite.
Sep 18 03:43:54.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:43:54.144: INFO: namespace kubectl-412 deletion completed in 6.168991779s

• [SLOW TEST:31.717 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:43:54.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 03:43:54.228: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e25c93d5-8160-4dc6-90dd-26d8aa85aa7d" in namespace "projected-2050" to be "success or failure"
Sep 18 03:43:54.254: INFO: Pod "downwardapi-volume-e25c93d5-8160-4dc6-90dd-26d8aa85aa7d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.923349ms
Sep 18 03:43:56.262: INFO: Pod "downwardapi-volume-e25c93d5-8160-4dc6-90dd-26d8aa85aa7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033110663s
Sep 18 03:43:58.269: INFO: Pod "downwardapi-volume-e25c93d5-8160-4dc6-90dd-26d8aa85aa7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040767925s
STEP: Saw pod success
Sep 18 03:43:58.270: INFO: Pod "downwardapi-volume-e25c93d5-8160-4dc6-90dd-26d8aa85aa7d" satisfied condition "success or failure"
Sep 18 03:43:58.274: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e25c93d5-8160-4dc6-90dd-26d8aa85aa7d container client-container: 
STEP: delete the pod
Sep 18 03:43:58.299: INFO: Waiting for pod downwardapi-volume-e25c93d5-8160-4dc6-90dd-26d8aa85aa7d to disappear
Sep 18 03:43:58.303: INFO: Pod downwardapi-volume-e25c93d5-8160-4dc6-90dd-26d8aa85aa7d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:43:58.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2050" for this suite.
Sep 18 03:44:04.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:44:04.468: INFO: namespace projected-2050 deletion completed in 6.15450407s

• [SLOW TEST:10.322 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:44:04.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 18 03:44:04.544: INFO: Waiting up to 5m0s for pod "downwardapi-volume-892bf680-7817-4015-bfba-d0b174fa65af" in namespace "downward-api-8157" to be "success or failure"
Sep 18 03:44:04.554: INFO: Pod "downwardapi-volume-892bf680-7817-4015-bfba-d0b174fa65af": Phase="Pending", Reason="", readiness=false. Elapsed: 9.281256ms
Sep 18 03:44:06.562: INFO: Pod "downwardapi-volume-892bf680-7817-4015-bfba-d0b174fa65af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016831229s
Sep 18 03:44:08.569: INFO: Pod "downwardapi-volume-892bf680-7817-4015-bfba-d0b174fa65af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024788949s
STEP: Saw pod success
Sep 18 03:44:08.570: INFO: Pod "downwardapi-volume-892bf680-7817-4015-bfba-d0b174fa65af" satisfied condition "success or failure"
Sep 18 03:44:08.575: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-892bf680-7817-4015-bfba-d0b174fa65af container client-container: 
STEP: delete the pod
Sep 18 03:44:08.598: INFO: Waiting for pod downwardapi-volume-892bf680-7817-4015-bfba-d0b174fa65af to disappear
Sep 18 03:44:08.601: INFO: Pod downwardapi-volume-892bf680-7817-4015-bfba-d0b174fa65af no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:44:08.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8157" for this suite.
Sep 18 03:44:14.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:44:14.777: INFO: namespace downward-api-8157 deletion completed in 6.169183457s

• [SLOW TEST:10.309 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:44:14.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-9543/secret-test-f6d383a0-73f8-48b7-b022-0bf71b49b547
STEP: Creating a pod to test consume secrets
Sep 18 03:44:14.863: INFO: Waiting up to 5m0s for pod "pod-configmaps-3508c460-52b2-475a-964b-3f8ec88950e6" in namespace "secrets-9543" to be "success or failure"
Sep 18 03:44:14.937: INFO: Pod "pod-configmaps-3508c460-52b2-475a-964b-3f8ec88950e6": Phase="Pending", Reason="", readiness=false. Elapsed: 73.334973ms
Sep 18 03:44:16.945: INFO: Pod "pod-configmaps-3508c460-52b2-475a-964b-3f8ec88950e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081115991s
Sep 18 03:44:18.952: INFO: Pod "pod-configmaps-3508c460-52b2-475a-964b-3f8ec88950e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088262268s
STEP: Saw pod success
Sep 18 03:44:18.952: INFO: Pod "pod-configmaps-3508c460-52b2-475a-964b-3f8ec88950e6" satisfied condition "success or failure"
Sep 18 03:44:18.958: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-3508c460-52b2-475a-964b-3f8ec88950e6 container env-test: 
STEP: delete the pod
Sep 18 03:44:19.083: INFO: Waiting for pod pod-configmaps-3508c460-52b2-475a-964b-3f8ec88950e6 to disappear
Sep 18 03:44:19.092: INFO: Pod pod-configmaps-3508c460-52b2-475a-964b-3f8ec88950e6 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:44:19.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9543" for this suite.
Sep 18 03:44:25.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:44:25.306: INFO: namespace secrets-9543 deletion completed in 6.206413458s

• [SLOW TEST:10.527 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:44:25.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-867
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-867 to expose endpoints map[]
Sep 18 03:44:25.438: INFO: Get endpoints failed (4.557665ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Sep 18 03:44:26.444: INFO: successfully validated that service multi-endpoint-test in namespace services-867 exposes endpoints map[] (1.010869577s elapsed)
STEP: Creating pod pod1 in namespace services-867
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-867 to expose endpoints map[pod1:[100]]
Sep 18 03:44:29.571: INFO: successfully validated that service multi-endpoint-test in namespace services-867 exposes endpoints map[pod1:[100]] (3.116023967s elapsed)
STEP: Creating pod pod2 in namespace services-867
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-867 to expose endpoints map[pod1:[100] pod2:[101]]
Sep 18 03:44:33.702: INFO: successfully validated that service multi-endpoint-test in namespace services-867 exposes endpoints map[pod1:[100] pod2:[101]] (4.114053455s elapsed)
STEP: Deleting pod pod1 in namespace services-867
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-867 to expose endpoints map[pod2:[101]]
Sep 18 03:44:33.740: INFO: successfully validated that service multi-endpoint-test in namespace services-867 exposes endpoints map[pod2:[101]] (30.586058ms elapsed)
STEP: Deleting pod pod2 in namespace services-867
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-867 to expose endpoints map[]
Sep 18 03:44:33.811: INFO: successfully validated that service multi-endpoint-test in namespace services-867 exposes endpoints map[] (65.230201ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:44:34.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-867" for this suite.
Sep 18 03:44:56.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:44:56.239: INFO: namespace services-867 deletion completed in 22.176249312s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:30.932 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:44:56.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-69ea75f2-b451-4568-97d7-9a43baa275ba in namespace container-probe-7074
Sep 18 03:45:00.376: INFO: Started pod liveness-69ea75f2-b451-4568-97d7-9a43baa275ba in namespace container-probe-7074
STEP: checking the pod's current state and verifying that restartCount is present
Sep 18 03:45:00.382: INFO: Initial restart count of pod liveness-69ea75f2-b451-4568-97d7-9a43baa275ba is 0
Sep 18 03:45:18.740: INFO: Restart count of pod container-probe-7074/liveness-69ea75f2-b451-4568-97d7-9a43baa275ba is now 1 (18.357855097s elapsed)
Sep 18 03:45:38.904: INFO: Restart count of pod container-probe-7074/liveness-69ea75f2-b451-4568-97d7-9a43baa275ba is now 2 (38.522586522s elapsed)
Sep 18 03:45:58.973: INFO: Restart count of pod container-probe-7074/liveness-69ea75f2-b451-4568-97d7-9a43baa275ba is now 3 (58.590786311s elapsed)
Sep 18 03:46:17.070: INFO: Restart count of pod container-probe-7074/liveness-69ea75f2-b451-4568-97d7-9a43baa275ba is now 4 (1m16.688531117s elapsed)
Sep 18 03:47:27.316: INFO: Restart count of pod container-probe-7074/liveness-69ea75f2-b451-4568-97d7-9a43baa275ba is now 5 (2m26.933947392s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:47:27.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7074" for this suite.
Sep 18 03:47:33.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:47:33.523: INFO: namespace container-probe-7074 deletion completed in 6.1570455s

• [SLOW TEST:157.282 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:47:33.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-f32cfe64-6422-4b51-b8ac-e742dbb37c83
STEP: Creating a pod to test consume secrets
Sep 18 03:47:33.637: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-df0ea80e-aaba-4b0e-9b81-d372c933a243" in namespace "projected-412" to be "success or failure"
Sep 18 03:47:33.659: INFO: Pod "pod-projected-secrets-df0ea80e-aaba-4b0e-9b81-d372c933a243": Phase="Pending", Reason="", readiness=false. Elapsed: 21.919053ms
Sep 18 03:47:35.665: INFO: Pod "pod-projected-secrets-df0ea80e-aaba-4b0e-9b81-d372c933a243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027977788s
Sep 18 03:47:37.671: INFO: Pod "pod-projected-secrets-df0ea80e-aaba-4b0e-9b81-d372c933a243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034105537s
STEP: Saw pod success
Sep 18 03:47:37.672: INFO: Pod "pod-projected-secrets-df0ea80e-aaba-4b0e-9b81-d372c933a243" satisfied condition "success or failure"
Sep 18 03:47:37.677: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-df0ea80e-aaba-4b0e-9b81-d372c933a243 container projected-secret-volume-test: 
STEP: delete the pod
Sep 18 03:47:37.851: INFO: Waiting for pod pod-projected-secrets-df0ea80e-aaba-4b0e-9b81-d372c933a243 to disappear
Sep 18 03:47:37.870: INFO: Pod pod-projected-secrets-df0ea80e-aaba-4b0e-9b81-d372c933a243 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:47:37.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-412" for this suite.
Sep 18 03:47:43.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:47:44.041: INFO: namespace projected-412 deletion completed in 6.160737938s

• [SLOW TEST:10.517 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:47:44.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-17494a22-4466-48cd-b5b0-9d4f49cfe6bc
STEP: Creating configMap with name cm-test-opt-upd-e05b77cc-a5d0-4fb6-b3a9-e32c19daf9f4
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-17494a22-4466-48cd-b5b0-9d4f49cfe6bc
STEP: Updating configmap cm-test-opt-upd-e05b77cc-a5d0-4fb6-b3a9-e32c19daf9f4
STEP: Creating configMap with name cm-test-opt-create-ca4f0c43-144d-4421-aa6f-1a1df685801e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:47:52.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4989" for this suite.
Sep 18 03:48:14.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:48:14.442: INFO: namespace configmap-4989 deletion completed in 22.173483813s

• [SLOW TEST:30.400 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
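Editor's note: the ConfigMap test above mounts ConfigMaps marked optional into a volume and then verifies that a delete, an update, and a late create are all eventually reflected in the mounted files. A minimal sketch of the kind of pod spec involved — the names and paths here are illustrative assumptions, not the ones the test framework actually generates:

```yaml
# Hypothetical pod mounting an optional ConfigMap as a volume.
# Because optional: true, the pod starts even if the ConfigMap
# is absent (e.g. after the "Deleting configmap" step above).
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo        # illustrative name
spec:
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm-volume
    configMap:
      name: cm-test-opt-del          # illustrative; test uses generated names
      optional: true
```

The kubelet propagates ConfigMap changes to mounted volumes on its sync loop rather than instantly, which is why the test polls ("waiting to observe update in volume") instead of asserting an immediate change.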
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:48:14.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 18 03:48:14.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-146'
Sep 18 03:48:15.644: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 18 03:48:15.644: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Sep 18 03:48:15.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-146'
Sep 18 03:48:16.785: INFO: stderr: ""
Sep 18 03:48:16.785: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:48:16.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-146" for this suite.
Sep 18 03:48:38.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:48:38.989: INFO: namespace kubectl-146 deletion completed in 22.191865841s

• [SLOW TEST:24.545 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
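Editor's note: the stderr captured above warns that `kubectl run --generator=job/v1` is deprecated and points at `kubectl create` as the replacement. For reference, the deprecated invocation the test runs and its non-deprecated equivalent (namespace taken from the log; this is a sketch, not output from the run):

```shell
# Deprecated form exercised by the test:
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-146

# Equivalent non-deprecated form suggested by the warning:
kubectl create job e2e-test-nginx-job \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-146
```

Both produce the same `job.batch/e2e-test-nginx-job` object; `kubectl create job` is the form that survives the generator removal.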
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:48:38.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 18 03:48:39.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4597'
Sep 18 03:48:40.231: INFO: stderr: ""
Sep 18 03:48:40.231: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Sep 18 03:48:45.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-4597 -o json'
Sep 18 03:48:46.375: INFO: stderr: ""
Sep 18 03:48:46.376: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-09-18T03:48:40Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-4597\",\n        \"resourceVersion\": \"802868\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-4597/pods/e2e-test-nginx-pod\",\n        \"uid\": \"c0535f25-1164-43dd-aa4a-42ede64d0db1\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-92q6h\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-92q6h\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-92q6h\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-18T03:48:40Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-18T03:48:43Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-18T03:48:43Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-18T03:48:40Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://9ffb26e37621100c830535b540864f53b47abdc3b28e0d6345d2106d2329c0eb\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-09-18T03:48:42Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.7\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.105\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-09-18T03:48:40Z\"\n    }\n}\n"
STEP: replace the image in the pod
Sep 18 03:48:46.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4597'
Sep 18 03:48:47.913: INFO: stderr: ""
Sep 18 03:48:47.913: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Sep 18 03:48:47.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4597'
Sep 18 03:48:51.359: INFO: stderr: ""
Sep 18 03:48:51.359: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:48:51.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4597" for this suite.
Sep 18 03:48:57.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:48:57.554: INFO: namespace kubectl-4597 deletion completed in 6.167145855s

• [SLOW TEST:18.564 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
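Editor's note: the "replace the image" step above pipes a modified manifest into `kubectl replace -f -`. The exact manifest the test sends is not shown in the log; a trimmed sketch of what such a manifest could look like, keeping only the fields needed to swap the image to the `docker.io/library/busybox:1.29` named in the verification step:

```yaml
# Illustrative input for: kubectl replace -f - --namespace=kubectl-4597
# A pod's image is one of the few mutable pod fields, so a full
# replace that changes only the image succeeds.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-4597
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # changed from nginx:1.14-alpine
    command: ["sleep", "3600"]              # assumption: busybox needs a long-running command
```

In practice the test round-trips the full JSON fetched with `kubectl get pod ... -o json` (shown above), edits the image field, and replaces; the sketch omits the server-populated fields.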
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:48:57.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 18 03:48:57.681: INFO: Waiting up to 5m0s for pod "pod-03d0ea1a-fbaf-44d6-ba47-1c51d0d1b977" in namespace "emptydir-2358" to be "success or failure"
Sep 18 03:48:57.720: INFO: Pod "pod-03d0ea1a-fbaf-44d6-ba47-1c51d0d1b977": Phase="Pending", Reason="", readiness=false. Elapsed: 39.11431ms
Sep 18 03:48:59.727: INFO: Pod "pod-03d0ea1a-fbaf-44d6-ba47-1c51d0d1b977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045638808s
Sep 18 03:49:01.733: INFO: Pod "pod-03d0ea1a-fbaf-44d6-ba47-1c51d0d1b977": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052364353s
STEP: Saw pod success
Sep 18 03:49:01.734: INFO: Pod "pod-03d0ea1a-fbaf-44d6-ba47-1c51d0d1b977" satisfied condition "success or failure"
Sep 18 03:49:01.738: INFO: Trying to get logs from node iruya-worker pod pod-03d0ea1a-fbaf-44d6-ba47-1c51d0d1b977 container test-container: 
STEP: delete the pod
Sep 18 03:49:01.797: INFO: Waiting for pod pod-03d0ea1a-fbaf-44d6-ba47-1c51d0d1b977 to disappear
Sep 18 03:49:01.812: INFO: Pod pod-03d0ea1a-fbaf-44d6-ba47-1c51d0d1b977 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:49:01.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2358" for this suite.
Sep 18 03:49:07.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:49:08.132: INFO: namespace emptydir-2358 deletion completed in 6.311902323s

• [SLOW TEST:10.573 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:49:08.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 18 03:49:08.202: INFO: Waiting up to 5m0s for pod "pod-50940b4a-ffa8-4d14-b9ed-f8bd9ef6bb5c" in namespace "emptydir-8840" to be "success or failure"
Sep 18 03:49:08.245: INFO: Pod "pod-50940b4a-ffa8-4d14-b9ed-f8bd9ef6bb5c": Phase="Pending", Reason="", readiness=false. Elapsed: 43.50447ms
Sep 18 03:49:10.264: INFO: Pod "pod-50940b4a-ffa8-4d14-b9ed-f8bd9ef6bb5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061967506s
Sep 18 03:49:12.271: INFO: Pod "pod-50940b4a-ffa8-4d14-b9ed-f8bd9ef6bb5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069492245s
STEP: Saw pod success
Sep 18 03:49:12.272: INFO: Pod "pod-50940b4a-ffa8-4d14-b9ed-f8bd9ef6bb5c" satisfied condition "success or failure"
Sep 18 03:49:12.277: INFO: Trying to get logs from node iruya-worker pod pod-50940b4a-ffa8-4d14-b9ed-f8bd9ef6bb5c container test-container: 
STEP: delete the pod
Sep 18 03:49:12.299: INFO: Waiting for pod pod-50940b4a-ffa8-4d14-b9ed-f8bd9ef6bb5c to disappear
Sep 18 03:49:12.304: INFO: Pod pod-50940b4a-ffa8-4d14-b9ed-f8bd9ef6bb5c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:49:12.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8840" for this suite.
Sep 18 03:49:18.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:49:18.466: INFO: namespace emptydir-8840 deletion completed in 6.153761577s

• [SLOW TEST:10.333 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
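Editor's note: the two EmptyDir tests above ("root,0777,tmpfs" and "root,0644,tmpfs") follow the same pattern — a short-lived pod writes into a tmpfs-backed emptyDir and verifies the mode bits, then the framework reads its logs. A tmpfs-backed emptyDir is requested with `medium: Memory`; a hedged sketch (the real tests use a dedicated mounttest image, so the busybox command here is a stand-in):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Stand-in for the e2e mounttest binary: show the mount and its perms.
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir
```

With `restartPolicy: Never` the pod runs to `Succeeded`, matching the "success or failure" wait loop seen in the log.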
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:49:18.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:49:48.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5213" for this suite.
Sep 18 03:49:54.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:49:54.315: INFO: namespace container-runtime-5213 deletion completed in 6.169323864s

• [SLOW TEST:35.848 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:49:54.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Sep 18 03:49:54.414: INFO: Waiting up to 5m0s for pod "client-containers-77165ff1-6648-4031-b1d8-7bafceb7221a" in namespace "containers-2374" to be "success or failure"
Sep 18 03:49:54.419: INFO: Pod "client-containers-77165ff1-6648-4031-b1d8-7bafceb7221a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.22382ms
Sep 18 03:49:56.427: INFO: Pod "client-containers-77165ff1-6648-4031-b1d8-7bafceb7221a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012734895s
Sep 18 03:49:58.434: INFO: Pod "client-containers-77165ff1-6648-4031-b1d8-7bafceb7221a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019650533s
STEP: Saw pod success
Sep 18 03:49:58.434: INFO: Pod "client-containers-77165ff1-6648-4031-b1d8-7bafceb7221a" satisfied condition "success or failure"
Sep 18 03:49:58.438: INFO: Trying to get logs from node iruya-worker pod client-containers-77165ff1-6648-4031-b1d8-7bafceb7221a container test-container: 
STEP: delete the pod
Sep 18 03:49:58.459: INFO: Waiting for pod client-containers-77165ff1-6648-4031-b1d8-7bafceb7221a to disappear
Sep 18 03:49:58.463: INFO: Pod client-containers-77165ff1-6648-4031-b1d8-7bafceb7221a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:49:58.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2374" for this suite.
Sep 18 03:50:04.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:50:04.627: INFO: namespace containers-2374 deletion completed in 6.154002608s

• [SLOW TEST:10.309 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:50:04.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 18 03:50:07.761: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:50:07.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-576" for this suite.
Sep 18 03:50:14.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:50:14.169: INFO: namespace container-runtime-576 deletion completed in 6.173984474s

• [SLOW TEST:9.540 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
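Editor's note: the termination-message test above sets a non-default `terminationMessagePath` and runs the container as a non-root user; on exit the kubelet copies that file's contents ("DONE" in the log) into the container status. A sketch of the relevant spec fields — the pod name, path, and UID are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: term-msg-container
    image: docker.io/library/busybox:1.29
    # Write the message to the custom path, then exit.
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path
    securityContext:
      runAsUser: 1000                                     # non-root user
```

After the pod terminates, the message surfaces in `status.containerStatuses[0].state.terminated.message`, which is what the "the termination message should be set" step asserts against.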
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:50:14.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9212/configmap-test-ecc54b15-8f18-4118-83f7-f38820c3d110
STEP: Creating a pod to test consume configMaps
Sep 18 03:50:14.246: INFO: Waiting up to 5m0s for pod "pod-configmaps-c5bd8002-dbbb-44ce-9d80-dec68703df5a" in namespace "configmap-9212" to be "success or failure"
Sep 18 03:50:14.261: INFO: Pod "pod-configmaps-c5bd8002-dbbb-44ce-9d80-dec68703df5a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.114071ms
Sep 18 03:50:16.268: INFO: Pod "pod-configmaps-c5bd8002-dbbb-44ce-9d80-dec68703df5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021514763s
Sep 18 03:50:18.276: INFO: Pod "pod-configmaps-c5bd8002-dbbb-44ce-9d80-dec68703df5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029464268s
STEP: Saw pod success
Sep 18 03:50:18.276: INFO: Pod "pod-configmaps-c5bd8002-dbbb-44ce-9d80-dec68703df5a" satisfied condition "success or failure"
Sep 18 03:50:18.281: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-c5bd8002-dbbb-44ce-9d80-dec68703df5a container env-test: 
STEP: delete the pod
Sep 18 03:50:18.302: INFO: Waiting for pod pod-configmaps-c5bd8002-dbbb-44ce-9d80-dec68703df5a to disappear
Sep 18 03:50:18.307: INFO: Pod pod-configmaps-c5bd8002-dbbb-44ce-9d80-dec68703df5a no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:50:18.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9212" for this suite.
Sep 18 03:50:24.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:50:24.518: INFO: namespace configmap-9212 deletion completed in 6.201402383s

• [SLOW TEST:10.348 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
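Editor's note: the ConfigMap test above injects ConfigMap keys into the pod as container environment variables and checks them from inside the `env-test` container. The key-to-variable mapping it exercises can be sketched in a few lines of Python; the `CONFIG_` prefix and key names below are illustrative, not taken from the test.

```python
# Minimal sketch of surfacing ConfigMap data keys as environment
# variables, the behavior the 'consumable via the environment' test
# verifies. Key and prefix names here are hypothetical examples.

def configmap_to_env(data, prefix=""):
    """Map ConfigMap data keys to environment-variable pairs,
    optionally applying a prefix (as envFrom with a prefix would)."""
    env = {}
    for key, value in data.items():
        env[prefix + key] = value
    return env

cm = {"data-1": "value-1", "data-2": "value-2"}
print(configmap_to_env(cm, prefix="CONFIG_"))
```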
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:50:24.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Sep 18 03:50:24.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1839'
Sep 18 03:50:26.116: INFO: stderr: ""
Sep 18 03:50:26.116: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 18 03:50:26.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1839'
Sep 18 03:50:27.273: INFO: stderr: ""
Sep 18 03:50:27.273: INFO: stdout: "update-demo-nautilus-bzhcm update-demo-nautilus-np42p "
Sep 18 03:50:27.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzhcm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1839'
Sep 18 03:50:28.372: INFO: stderr: ""
Sep 18 03:50:28.372: INFO: stdout: ""
Sep 18 03:50:28.372: INFO: update-demo-nautilus-bzhcm is created but not running
Sep 18 03:50:33.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1839'
Sep 18 03:50:34.554: INFO: stderr: ""
Sep 18 03:50:34.554: INFO: stdout: "update-demo-nautilus-bzhcm update-demo-nautilus-np42p "
Sep 18 03:50:34.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzhcm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1839'
Sep 18 03:50:35.698: INFO: stderr: ""
Sep 18 03:50:35.699: INFO: stdout: "true"
Sep 18 03:50:35.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzhcm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1839'
Sep 18 03:50:36.832: INFO: stderr: ""
Sep 18 03:50:36.832: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 18 03:50:36.832: INFO: validating pod update-demo-nautilus-bzhcm
Sep 18 03:50:36.839: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 18 03:50:36.839: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 18 03:50:36.839: INFO: update-demo-nautilus-bzhcm is verified up and running
Sep 18 03:50:36.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-np42p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1839'
Sep 18 03:50:37.950: INFO: stderr: ""
Sep 18 03:50:37.950: INFO: stdout: "true"
Sep 18 03:50:37.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-np42p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1839'
Sep 18 03:50:39.099: INFO: stderr: ""
Sep 18 03:50:39.099: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 18 03:50:39.099: INFO: validating pod update-demo-nautilus-np42p
Sep 18 03:50:39.105: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 18 03:50:39.106: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 18 03:50:39.106: INFO: update-demo-nautilus-np42p is verified up and running
STEP: using delete to clean up resources
Sep 18 03:50:39.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1839'
Sep 18 03:50:40.237: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 18 03:50:40.237: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Sep 18 03:50:40.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1839'
Sep 18 03:50:41.394: INFO: stderr: "No resources found.\n"
Sep 18 03:50:41.394: INFO: stdout: ""
Sep 18 03:50:41.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1839 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 18 03:50:42.522: INFO: stderr: ""
Sep 18 03:50:42.523: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:50:42.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1839" for this suite.
Sep 18 03:50:48.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:50:48.702: INFO: namespace kubectl-1839 deletion completed in 6.157177031s

• [SLOW TEST:24.182 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
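Editor's note: the Update Demo test above polls each pod with a go-template that prints `true` only once a container named `update-demo` reports a `running` state (the empty stdout at 03:50:28 versus `"true"` at 03:50:35 shows the transition). A rough Python equivalent of that predicate, operating on a pod dict shaped like `kubectl get pod -o json` output:

```python
def container_running(pod, name="update-demo"):
    """Return True if the named container reports a 'running' state,
    mirroring the go-template check used by the Update Demo test."""
    statuses = pod.get("status", {}).get("containerStatuses", [])
    return any(
        s.get("name") == name and "running" in s.get("state", {})
        for s in statuses
    )

pod = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "2020-09-18T03:50:30Z"}}}
]}}
print(container_running(pod))  # True once the container is up
```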
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:50:48.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-ed81da21-d16a-476a-8a10-29ef8bc3e213
STEP: Creating a pod to test consume configMaps
Sep 18 03:50:48.814: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5fe9a42-1ae1-46ee-bf42-1c3c1dbd9682" in namespace "configmap-333" to be "success or failure"
Sep 18 03:50:48.832: INFO: Pod "pod-configmaps-e5fe9a42-1ae1-46ee-bf42-1c3c1dbd9682": Phase="Pending", Reason="", readiness=false. Elapsed: 17.971649ms
Sep 18 03:50:50.840: INFO: Pod "pod-configmaps-e5fe9a42-1ae1-46ee-bf42-1c3c1dbd9682": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02580152s
Sep 18 03:50:52.847: INFO: Pod "pod-configmaps-e5fe9a42-1ae1-46ee-bf42-1c3c1dbd9682": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03254698s
STEP: Saw pod success
Sep 18 03:50:52.847: INFO: Pod "pod-configmaps-e5fe9a42-1ae1-46ee-bf42-1c3c1dbd9682" satisfied condition "success or failure"
Sep 18 03:50:52.852: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e5fe9a42-1ae1-46ee-bf42-1c3c1dbd9682 container configmap-volume-test: 
STEP: delete the pod
Sep 18 03:50:52.873: INFO: Waiting for pod pod-configmaps-e5fe9a42-1ae1-46ee-bf42-1c3c1dbd9682 to disappear
Sep 18 03:50:52.877: INFO: Pod pod-configmaps-e5fe9a42-1ae1-46ee-bf42-1c3c1dbd9682 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:50:52.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-333" for this suite.
Sep 18 03:50:58.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:50:59.077: INFO: namespace configmap-333 deletion completed in 6.191270969s

• [SLOW TEST:10.373 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
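Editor's note: the "volume with mappings" variant above mounts a ConfigMap whose `items` list renames keys to file paths inside the mount. A small sketch of that projection, writing each mapped key to a file under a temporary directory standing in for the volume mount (key and path names are illustrative):

```python
import os
import tempfile

def project_configmap(data, items, mount_dir):
    """Write selected ConfigMap keys to their mapped relative paths,
    like a configMap volume with an 'items' key/path mapping."""
    for item in items:
        path = os.path.join(mount_dir, item["path"])
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(data[item["key"]])

mount = tempfile.mkdtemp()  # stands in for the volume mount point
project_configmap(
    {"data-2": "value-2"},
    [{"key": "data-2", "path": "path/to/data-2"}],
    mount,
)
print(open(os.path.join(mount, "path/to/data-2")).read())
```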
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:50:59.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-49f54524-9a0c-41d3-9d4c-cf3029cf1af3
STEP: Creating a pod to test consume configMaps
Sep 18 03:50:59.160: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2b030e8d-7be6-46a3-b813-336a6f04d1d2" in namespace "projected-5926" to be "success or failure"
Sep 18 03:50:59.217: INFO: Pod "pod-projected-configmaps-2b030e8d-7be6-46a3-b813-336a6f04d1d2": Phase="Pending", Reason="", readiness=false. Elapsed: 56.177105ms
Sep 18 03:51:01.224: INFO: Pod "pod-projected-configmaps-2b030e8d-7be6-46a3-b813-336a6f04d1d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063790108s
Sep 18 03:51:03.233: INFO: Pod "pod-projected-configmaps-2b030e8d-7be6-46a3-b813-336a6f04d1d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07205581s
STEP: Saw pod success
Sep 18 03:51:03.233: INFO: Pod "pod-projected-configmaps-2b030e8d-7be6-46a3-b813-336a6f04d1d2" satisfied condition "success or failure"
Sep 18 03:51:03.238: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-2b030e8d-7be6-46a3-b813-336a6f04d1d2 container projected-configmap-volume-test: 
STEP: delete the pod
Sep 18 03:51:03.274: INFO: Waiting for pod pod-projected-configmaps-2b030e8d-7be6-46a3-b813-336a6f04d1d2 to disappear
Sep 18 03:51:03.305: INFO: Pod pod-projected-configmaps-2b030e8d-7be6-46a3-b813-336a6f04d1d2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:51:03.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5926" for this suite.
Sep 18 03:51:09.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:51:09.487: INFO: namespace projected-5926 deletion completed in 6.17306453s

• [SLOW TEST:10.405 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
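Editor's note: every pod-based test in this run uses the same wait pattern visible in the timestamps above: poll the pod phase roughly every 2 s until it reaches a terminal phase, up to a 5 m timeout ("Waiting up to 5m0s for pod ... to be \"success or failure\""). A sketch of that loop, with an injectable phase source and sleep so it can run without a cluster:

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, poll=2.0, sleep=time.sleep):
    """Poll until the pod reports Succeeded or Failed, mirroring the
    framework's 'Waiting up to 5m0s ...' loop. get_phase is any
    callable returning the current pod phase string."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated phase sequence, as seen in the logs: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None))
```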
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:51:09.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Sep 18 03:51:09.593: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix610419815/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:51:10.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2440" for this suite.
Sep 18 03:51:16.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:51:16.670: INFO: namespace kubectl-2440 deletion completed in 6.160303283s

• [SLOW TEST:7.180 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
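Editor's note: the proxy test above starts `kubectl proxy --unix-socket=/path` and fetches `/api/` through that socket. The transport it exercises is just HTTP over a unix domain socket; the sketch below demonstrates that exchange with a stand-in server and a canned response body, not real apiserver output.

```python
import os
import socket
import tempfile
import threading

# HTTP exchange over a unix domain socket, the transport behind
# 'kubectl proxy --unix-socket=/path'. Server and body are stand-ins.
sock_path = os.path.join(tempfile.mkdtemp(), "proxy.sock")
ready = threading.Event()

def serve_once():
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    srv.listen(1)
    ready.set()  # socket is accepting; the client may connect now
    conn, _ = srv.accept()
    conn.recv(1024)  # read the request; contents ignored in this sketch
    conn.sendall(b'HTTP/1.0 200 OK\r\n\r\n{"versions": ["v1"]}')
    conn.close()
    srv.close()

t = threading.Thread(target=serve_once)
t.start()
ready.wait()

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(sock_path)
cli.sendall(b"GET /api/ HTTP/1.0\r\n\r\n")
reply = cli.recv(4096)
cli.close()
t.join()
print(reply.decode())
```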
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:51:16.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 03:51:16.769: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Sep 18 03:51:21.775: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Sep 18 03:51:23.786: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep 18 03:51:31.925: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8158,SelfLink:/apis/apps/v1/namespaces/deployment-8158/deployments/test-cleanup-deployment,UID:c2cc9ba7-c1fc-4405-9a36-a44196a3411d,ResourceVersion:803564,Generation:1,CreationTimestamp:2020-09-18 03:51:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-09-18 03:51:23 +0000 UTC 2020-09-18 03:51:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-09-18 03:51:31 +0000 UTC 2020-09-18 03:51:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Sep 18 03:51:31.932: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8158,SelfLink:/apis/apps/v1/namespaces/deployment-8158/replicasets/test-cleanup-deployment-55bbcbc84c,UID:75b7e5df-6af1-4786-9558-781a15fbbab7,ResourceVersion:803553,Generation:1,CreationTimestamp:2020-09-18 03:51:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c2cc9ba7-c1fc-4405-9a36-a44196a3411d 0x8fc6967 0x8fc6968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Sep 18 03:51:31.938: INFO: Pod "test-cleanup-deployment-55bbcbc84c-kqf6x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-kqf6x,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8158,SelfLink:/api/v1/namespaces/deployment-8158/pods/test-cleanup-deployment-55bbcbc84c-kqf6x,UID:edae06b8-a4f4-4351-8338-f49738b6b148,ResourceVersion:803551,Generation:0,CreationTimestamp:2020-09-18 03:51:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 75b7e5df-6af1-4786-9558-781a15fbbab7 0x90bfe17 0x90bfe18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bw6fh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bw6fh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-bw6fh true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x90bfe90} {node.kubernetes.io/unreachable Exists  NoExecute 0x90bfeb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:51:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:51:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:51:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-18 03:51:23 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.113,StartTime:2020-09-18 03:51:24 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-09-18 03:51:30 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://655c11ffec70b849e3f97e12f4eac55e539f997f527ee878d1f194a5c11819ce}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:51:31.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8158" for this suite.
Sep 18 03:51:39.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:51:40.113: INFO: namespace deployment-8158 deletion completed in 8.167096592s

• [SLOW TEST:23.442 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
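Editor's note: the Deployment dump above shows `RevisionHistoryLimit:*0`, which is what makes the controller delete the superseded replica set outright instead of retaining it for rollback. The cleanup policy (keep only the newest N old replica sets, ordered by revision) can be sketched as follows; the data shapes are illustrative, not the controller's actual types.

```python
def cleanup_old_replica_sets(old_rs, history_limit):
    """Return the replica sets to delete: everything beyond the
    newest `history_limit` entries, ordered newest-first by revision.
    Mirrors the effect of a Deployment's revisionHistoryLimit."""
    ordered = sorted(old_rs, key=lambda rs: rs["revision"], reverse=True)
    return ordered[history_limit:]

old = [{"name": "rs-a", "revision": 1}, {"name": "rs-b", "revision": 2}]
# With revisionHistoryLimit: 0, every superseded replica set goes away.
print([rs["name"] for rs in cleanup_old_replica_sets(old, 0)])  # → ['rs-b', 'rs-a']
```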
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:51:40.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Sep 18 03:51:46.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-4c526396-1f4f-4127-8dfc-70a91f65c583 -c busybox-main-container --namespace=emptydir-6111 -- cat /usr/share/volumeshare/shareddata.txt'
Sep 18 03:51:47.590: INFO: stderr: "I0918 03:51:47.495967    3531 log.go:172] (0x2956d90) (0x2956e00) Create stream\nI0918 03:51:47.497852    3531 log.go:172] (0x2956d90) (0x2956e00) Stream added, broadcasting: 1\nI0918 03:51:47.512825    3531 log.go:172] (0x2956d90) Reply frame received for 1\nI0918 03:51:47.513469    3531 log.go:172] (0x2956d90) (0x24ae850) Create stream\nI0918 03:51:47.513582    3531 log.go:172] (0x2956d90) (0x24ae850) Stream added, broadcasting: 3\nI0918 03:51:47.515282    3531 log.go:172] (0x2956d90) Reply frame received for 3\nI0918 03:51:47.515638    3531 log.go:172] (0x2956d90) (0x24af180) Create stream\nI0918 03:51:47.515745    3531 log.go:172] (0x2956d90) (0x24af180) Stream added, broadcasting: 5\nI0918 03:51:47.517333    3531 log.go:172] (0x2956d90) Reply frame received for 5\nI0918 03:51:47.576053    3531 log.go:172] (0x2956d90) Data frame received for 3\nI0918 03:51:47.576337    3531 log.go:172] (0x2956d90) Data frame received for 5\nI0918 03:51:47.576497    3531 log.go:172] (0x24ae850) (3) Data frame handling\nI0918 03:51:47.576773    3531 log.go:172] (0x24af180) (5) Data frame handling\nI0918 03:51:47.577531    3531 log.go:172] (0x2956d90) Data frame received for 1\nI0918 03:51:47.577729    3531 log.go:172] (0x2956e00) (1) Data frame handling\nI0918 03:51:47.577972    3531 log.go:172] (0x2956e00) (1) Data frame sent\nI0918 03:51:47.578128    3531 log.go:172] (0x24ae850) (3) Data frame sent\nI0918 03:51:47.578309    3531 log.go:172] (0x2956d90) Data frame received for 3\nI0918 03:51:47.578389    3531 log.go:172] (0x24ae850) (3) Data frame handling\nI0918 03:51:47.579463    3531 log.go:172] (0x2956d90) (0x2956e00) Stream removed, broadcasting: 1\nI0918 03:51:47.581312    3531 log.go:172] (0x2956d90) Go away received\nI0918 03:51:47.583615    3531 log.go:172] (0x2956d90) (0x2956e00) Stream removed, broadcasting: 1\nI0918 03:51:47.583793    3531 log.go:172] (0x2956d90) (0x24ae850) Stream removed, broadcasting: 3\nI0918 03:51:47.583928    3531 log.go:172] (0x2956d90) (0x24af180) Stream removed, broadcasting: 5\n"
Sep 18 03:51:47.591: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:51:47.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6111" for this suite.
Sep 18 03:51:53.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:51:53.761: INFO: namespace emptydir-6111 deletion completed in 6.160432063s

• [SLOW TEST:13.646 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:51:53.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-44678fe2-2a18-4d08-a2d1-434678ca5c9f
STEP: Creating a pod to test consume configMaps
Sep 18 03:51:53.877: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-206a88c2-9cd2-42aa-868b-1abe9013848b" in namespace "projected-7117" to be "success or failure"
Sep 18 03:51:53.942: INFO: Pod "pod-projected-configmaps-206a88c2-9cd2-42aa-868b-1abe9013848b": Phase="Pending", Reason="", readiness=false. Elapsed: 65.485032ms
Sep 18 03:51:55.950: INFO: Pod "pod-projected-configmaps-206a88c2-9cd2-42aa-868b-1abe9013848b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072862583s
Sep 18 03:51:57.958: INFO: Pod "pod-projected-configmaps-206a88c2-9cd2-42aa-868b-1abe9013848b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080752082s
STEP: Saw pod success
Sep 18 03:51:57.958: INFO: Pod "pod-projected-configmaps-206a88c2-9cd2-42aa-868b-1abe9013848b" satisfied condition "success or failure"
Sep 18 03:51:57.963: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-206a88c2-9cd2-42aa-868b-1abe9013848b container projected-configmap-volume-test: 
STEP: delete the pod
Sep 18 03:51:58.207: INFO: Waiting for pod pod-projected-configmaps-206a88c2-9cd2-42aa-868b-1abe9013848b to disappear
Sep 18 03:51:58.214: INFO: Pod pod-projected-configmaps-206a88c2-9cd2-42aa-868b-1abe9013848b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:51:58.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7117" for this suite.
Sep 18 03:52:04.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:52:04.367: INFO: namespace projected-7117 deletion completed in 6.145311248s

• [SLOW TEST:10.605 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:52:04.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1698.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1698.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1698.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1698.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1698.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1698.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1698.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1698.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1698.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1698.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 37.194.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.194.37_udp@PTR;check="$$(dig +tcp +noall +answer +search 37.194.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.194.37_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1698.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1698.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1698.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1698.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1698.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1698.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1698.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1698.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1698.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1698.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1698.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 37.194.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.194.37_udp@PTR;check="$$(dig +tcp +noall +answer +search 37.194.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.194.37_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 18 03:52:10.589: INFO: Unable to read wheezy_udp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:10.594: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:10.598: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:10.603: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:10.634: INFO: Unable to read jessie_udp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:10.639: INFO: Unable to read jessie_tcp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:10.644: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:10.649: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:10.676: INFO: Lookups using dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96 failed for: [wheezy_udp@dns-test-service.dns-1698.svc.cluster.local wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local jessie_udp@dns-test-service.dns-1698.svc.cluster.local jessie_tcp@dns-test-service.dns-1698.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local]

Sep 18 03:52:15.684: INFO: Unable to read wheezy_udp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:15.689: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:15.694: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:15.699: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:15.728: INFO: Unable to read jessie_udp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:15.732: INFO: Unable to read jessie_tcp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:15.736: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:15.740: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:15.761: INFO: Lookups using dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96 failed for: [wheezy_udp@dns-test-service.dns-1698.svc.cluster.local wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local jessie_udp@dns-test-service.dns-1698.svc.cluster.local jessie_tcp@dns-test-service.dns-1698.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local]

Sep 18 03:52:20.684: INFO: Unable to read wheezy_udp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:20.690: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:20.696: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:20.700: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:20.729: INFO: Unable to read jessie_udp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:20.733: INFO: Unable to read jessie_tcp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:20.737: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:20.741: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:20.767: INFO: Lookups using dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96 failed for: [wheezy_udp@dns-test-service.dns-1698.svc.cluster.local wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local jessie_udp@dns-test-service.dns-1698.svc.cluster.local jessie_tcp@dns-test-service.dns-1698.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local]

Sep 18 03:52:25.682: INFO: Unable to read wheezy_udp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:25.687: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:25.691: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:25.694: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:25.723: INFO: Unable to read jessie_udp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:25.729: INFO: Unable to read jessie_tcp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:25.733: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:25.737: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:25.761: INFO: Lookups using dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96 failed for: [wheezy_udp@dns-test-service.dns-1698.svc.cluster.local wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local jessie_udp@dns-test-service.dns-1698.svc.cluster.local jessie_tcp@dns-test-service.dns-1698.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local]

Sep 18 03:52:30.683: INFO: Unable to read wheezy_udp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:30.689: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:30.694: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:30.699: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:30.732: INFO: Unable to read jessie_udp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:30.737: INFO: Unable to read jessie_tcp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:30.744: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:30.748: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:30.771: INFO: Lookups using dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96 failed for: [wheezy_udp@dns-test-service.dns-1698.svc.cluster.local wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local jessie_udp@dns-test-service.dns-1698.svc.cluster.local jessie_tcp@dns-test-service.dns-1698.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local]

Sep 18 03:52:35.683: INFO: Unable to read wheezy_udp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:35.687: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:35.692: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:35.697: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:35.729: INFO: Unable to read jessie_udp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:35.732: INFO: Unable to read jessie_tcp@dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:35.736: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:35.740: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local from pod dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96: the server could not find the requested resource (get pods dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96)
Sep 18 03:52:35.764: INFO: Lookups using dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96 failed for: [wheezy_udp@dns-test-service.dns-1698.svc.cluster.local wheezy_tcp@dns-test-service.dns-1698.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local jessie_udp@dns-test-service.dns-1698.svc.cluster.local jessie_tcp@dns-test-service.dns-1698.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1698.svc.cluster.local]

Sep 18 03:52:40.767: INFO: DNS probes using dns-1698/dns-test-d8ec66a4-5d20-4bfe-b159-a256dadf9b96 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:52:41.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1698" for this suite.
Sep 18 03:52:47.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:52:47.819: INFO: namespace dns-1698 deletion completed in 6.301712038s

• [SLOW TEST:43.451 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:52:47.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Sep 18 03:52:52.488: INFO: Successfully updated pod "annotationupdatee3441b69-be52-4de8-a2f2-a281c6697a46"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:52:54.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1719" for this suite.
Sep 18 03:53:16.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:53:16.702: INFO: namespace downward-api-1719 deletion completed in 22.166738583s

• [SLOW TEST:28.879 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:53:16.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 18 03:53:16.810: INFO: Waiting up to 5m0s for pod "pod-4a94a020-2683-4467-ba67-b2b8ed57bf13" in namespace "emptydir-1108" to be "success or failure"
Sep 18 03:53:16.838: INFO: Pod "pod-4a94a020-2683-4467-ba67-b2b8ed57bf13": Phase="Pending", Reason="", readiness=false. Elapsed: 27.945388ms
Sep 18 03:53:18.845: INFO: Pod "pod-4a94a020-2683-4467-ba67-b2b8ed57bf13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034234537s
Sep 18 03:53:20.850: INFO: Pod "pod-4a94a020-2683-4467-ba67-b2b8ed57bf13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040090064s
STEP: Saw pod success
Sep 18 03:53:20.851: INFO: Pod "pod-4a94a020-2683-4467-ba67-b2b8ed57bf13" satisfied condition "success or failure"
Sep 18 03:53:20.856: INFO: Trying to get logs from node iruya-worker pod pod-4a94a020-2683-4467-ba67-b2b8ed57bf13 container test-container: 
STEP: delete the pod
Sep 18 03:53:20.898: INFO: Waiting for pod pod-4a94a020-2683-4467-ba67-b2b8ed57bf13 to disappear
Sep 18 03:53:20.917: INFO: Pod pod-4a94a020-2683-4467-ba67-b2b8ed57bf13 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:53:20.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1108" for this suite.
Sep 18 03:53:26.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:53:27.106: INFO: namespace emptydir-1108 deletion completed in 6.179620611s

• [SLOW TEST:10.403 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:53:27.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-c4f5b950-c857-4f41-b2c7-0963ec0a14c7
STEP: Creating a pod to test consume configMaps
Sep 18 03:53:27.247: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-82a872f9-7555-4039-99b3-c8d234bb40ef" in namespace "projected-8024" to be "success or failure"
Sep 18 03:53:27.302: INFO: Pod "pod-projected-configmaps-82a872f9-7555-4039-99b3-c8d234bb40ef": Phase="Pending", Reason="", readiness=false. Elapsed: 55.117512ms
Sep 18 03:53:29.311: INFO: Pod "pod-projected-configmaps-82a872f9-7555-4039-99b3-c8d234bb40ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064100401s
Sep 18 03:53:31.318: INFO: Pod "pod-projected-configmaps-82a872f9-7555-4039-99b3-c8d234bb40ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070923516s
Sep 18 03:53:33.326: INFO: Pod "pod-projected-configmaps-82a872f9-7555-4039-99b3-c8d234bb40ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.078892169s
STEP: Saw pod success
Sep 18 03:53:33.327: INFO: Pod "pod-projected-configmaps-82a872f9-7555-4039-99b3-c8d234bb40ef" satisfied condition "success or failure"
Sep 18 03:53:33.336: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-82a872f9-7555-4039-99b3-c8d234bb40ef container projected-configmap-volume-test: 
STEP: delete the pod
Sep 18 03:53:33.360: INFO: Waiting for pod pod-projected-configmaps-82a872f9-7555-4039-99b3-c8d234bb40ef to disappear
Sep 18 03:53:33.384: INFO: Pod pod-projected-configmaps-82a872f9-7555-4039-99b3-c8d234bb40ef no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:53:33.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8024" for this suite.
Sep 18 03:53:39.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:53:39.560: INFO: namespace projected-8024 deletion completed in 6.165303026s

• [SLOW TEST:12.454 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:53:39.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-cad1c1de-f78a-4c89-90d7-19fd382fabdc
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:53:39.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1136" for this suite.
Sep 18 03:53:45.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:53:45.807: INFO: namespace configmap-1136 deletion completed in 6.146930207s

• [SLOW TEST:6.245 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:53:45.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Sep 18 03:53:45.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8233'
Sep 18 03:53:50.951: INFO: stderr: ""
Sep 18 03:53:50.951: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 18 03:53:50.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8233'
Sep 18 03:53:52.091: INFO: stderr: ""
Sep 18 03:53:52.091: INFO: stdout: "update-demo-nautilus-9np2q update-demo-nautilus-g2vz4 "
Sep 18 03:53:52.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9np2q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8233'
Sep 18 03:53:53.224: INFO: stderr: ""
Sep 18 03:53:53.224: INFO: stdout: ""
Sep 18 03:53:53.224: INFO: update-demo-nautilus-9np2q is created but not running
Sep 18 03:53:58.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8233'
Sep 18 03:53:59.368: INFO: stderr: ""
Sep 18 03:53:59.368: INFO: stdout: "update-demo-nautilus-9np2q update-demo-nautilus-g2vz4 "
Sep 18 03:53:59.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9np2q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8233'
Sep 18 03:54:00.507: INFO: stderr: ""
Sep 18 03:54:00.507: INFO: stdout: "true"
Sep 18 03:54:00.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9np2q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8233'
Sep 18 03:54:01.627: INFO: stderr: ""
Sep 18 03:54:01.627: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 18 03:54:01.627: INFO: validating pod update-demo-nautilus-9np2q
Sep 18 03:54:01.633: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 18 03:54:01.633: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 18 03:54:01.633: INFO: update-demo-nautilus-9np2q is verified up and running
Sep 18 03:54:01.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2vz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8233'
Sep 18 03:54:02.796: INFO: stderr: ""
Sep 18 03:54:02.796: INFO: stdout: "true"
Sep 18 03:54:02.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2vz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8233'
Sep 18 03:54:03.910: INFO: stderr: ""
Sep 18 03:54:03.910: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 18 03:54:03.911: INFO: validating pod update-demo-nautilus-g2vz4
Sep 18 03:54:03.916: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 18 03:54:03.917: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 18 03:54:03.917: INFO: update-demo-nautilus-g2vz4 is verified up and running
STEP: rolling-update to new replication controller
Sep 18 03:54:03.926: INFO: scanned /root for discovery docs: 
Sep 18 03:54:03.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8233'
Sep 18 03:54:28.071: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Sep 18 03:54:28.072: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 18 03:54:28.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8233'
Sep 18 03:54:29.249: INFO: stderr: ""
Sep 18 03:54:29.249: INFO: stdout: "update-demo-kitten-4q6j2 update-demo-kitten-b8zzz "
Sep 18 03:54:29.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4q6j2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8233'
Sep 18 03:54:30.334: INFO: stderr: ""
Sep 18 03:54:30.335: INFO: stdout: "true"
Sep 18 03:54:30.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4q6j2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8233'
Sep 18 03:54:31.460: INFO: stderr: ""
Sep 18 03:54:31.460: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Sep 18 03:54:31.460: INFO: validating pod update-demo-kitten-4q6j2
Sep 18 03:54:31.467: INFO: got data: {
  "image": "kitten.jpg"
}

Sep 18 03:54:31.467: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Sep 18 03:54:31.467: INFO: update-demo-kitten-4q6j2 is verified up and running
Sep 18 03:54:31.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b8zzz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8233'
Sep 18 03:54:32.583: INFO: stderr: ""
Sep 18 03:54:32.584: INFO: stdout: "true"
Sep 18 03:54:32.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b8zzz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8233'
Sep 18 03:54:33.708: INFO: stderr: ""
Sep 18 03:54:33.708: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Sep 18 03:54:33.709: INFO: validating pod update-demo-kitten-b8zzz
Sep 18 03:54:33.722: INFO: got data: {
  "image": "kitten.jpg"
}

Sep 18 03:54:33.722: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Sep 18 03:54:33.722: INFO: update-demo-kitten-b8zzz is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:54:33.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8233" for this suite.
Sep 18 03:54:55.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:54:55.912: INFO: namespace kubectl-8233 deletion completed in 22.180559284s

• [SLOW TEST:70.102 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:54:55.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Sep 18 03:54:55.991: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5842" to be "success or failure"
Sep 18 03:54:56.002: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.852667ms
Sep 18 03:54:58.014: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022564507s
Sep 18 03:55:00.021: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029607126s
Sep 18 03:55:02.037: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045536179s
STEP: Saw pod success
Sep 18 03:55:02.038: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Sep 18 03:55:02.042: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Sep 18 03:55:02.064: INFO: Waiting for pod pod-host-path-test to disappear
Sep 18 03:55:02.067: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:55:02.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5842" for this suite.
Sep 18 03:55:08.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:55:08.234: INFO: namespace hostpath-5842 deletion completed in 6.157159142s

• [SLOW TEST:12.318 seconds]
[sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:55:08.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep 18 03:55:08.310: INFO: Waiting up to 5m0s for pod "downward-api-30e7091c-66c6-4bcf-8ae7-a0d8b6df2ada" in namespace "downward-api-2746" to be "success or failure"
Sep 18 03:55:08.374: INFO: Pod "downward-api-30e7091c-66c6-4bcf-8ae7-a0d8b6df2ada": Phase="Pending", Reason="", readiness=false. Elapsed: 63.622049ms
Sep 18 03:55:10.382: INFO: Pod "downward-api-30e7091c-66c6-4bcf-8ae7-a0d8b6df2ada": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07096621s
Sep 18 03:55:12.390: INFO: Pod "downward-api-30e7091c-66c6-4bcf-8ae7-a0d8b6df2ada": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078877704s
STEP: Saw pod success
Sep 18 03:55:12.390: INFO: Pod "downward-api-30e7091c-66c6-4bcf-8ae7-a0d8b6df2ada" satisfied condition "success or failure"
Sep 18 03:55:12.394: INFO: Trying to get logs from node iruya-worker2 pod downward-api-30e7091c-66c6-4bcf-8ae7-a0d8b6df2ada container dapi-container: 
STEP: delete the pod
Sep 18 03:55:12.423: INFO: Waiting for pod downward-api-30e7091c-66c6-4bcf-8ae7-a0d8b6df2ada to disappear
Sep 18 03:55:12.426: INFO: Pod downward-api-30e7091c-66c6-4bcf-8ae7-a0d8b6df2ada no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:55:12.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2746" for this suite.
Sep 18 03:55:18.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:55:18.611: INFO: namespace downward-api-2746 deletion completed in 6.176956443s

• [SLOW TEST:10.376 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:55:18.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1677
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1677
STEP: Creating statefulset with conflicting port in namespace statefulset-1677
STEP: Waiting until pod test-pod will start running in namespace statefulset-1677
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1677
Sep 18 03:55:22.845: INFO: Observed stateful pod in namespace: statefulset-1677, name: ss-0, uid: fd9bbc15-f0af-4afa-83ec-0944d37811d4, status phase: Pending. Waiting for statefulset controller to delete.
Sep 18 03:55:23.351: INFO: Observed stateful pod in namespace: statefulset-1677, name: ss-0, uid: fd9bbc15-f0af-4afa-83ec-0944d37811d4, status phase: Failed. Waiting for statefulset controller to delete.
Sep 18 03:55:23.357: INFO: Observed stateful pod in namespace: statefulset-1677, name: ss-0, uid: fd9bbc15-f0af-4afa-83ec-0944d37811d4, status phase: Failed. Waiting for statefulset controller to delete.
Sep 18 03:55:23.389: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1677
STEP: Removing pod with conflicting port in namespace statefulset-1677
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1677 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep 18 03:55:27.490: INFO: Deleting all statefulset in ns statefulset-1677
Sep 18 03:55:27.496: INFO: Scaling statefulset ss to 0
Sep 18 03:55:37.535: INFO: Waiting for statefulset status.replicas updated to 0
Sep 18 03:55:37.540: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:55:37.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1677" for this suite.
Sep 18 03:55:43.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:55:43.731: INFO: namespace statefulset-1677 deletion completed in 6.162260471s

• [SLOW TEST:25.117 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:55:43.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Sep 18 03:55:51.923: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:55:51.932: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 18 03:55:53.932: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:55:53.940: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 18 03:55:55.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:55:55.938: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 18 03:55:57.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:55:57.940: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 18 03:55:59.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:55:59.939: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 18 03:56:01.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:56:01.940: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 18 03:56:03.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:56:03.940: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 18 03:56:05.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:56:05.938: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 18 03:56:07.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:56:07.941: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 18 03:56:09.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:56:09.940: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 18 03:56:11.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:56:11.940: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 18 03:56:13.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:56:13.940: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 18 03:56:15.932: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 18 03:56:15.938: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 03:56:15.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1785" for this suite.
Sep 18 03:56:37.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 03:56:38.107: INFO: namespace container-lifecycle-hook-1785 deletion completed in 22.157558395s

• [SLOW TEST:54.375 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
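Editor's note: the poststart test above creates a pod whose container declares a `postStart` exec hook, then deletes it and polls every ~2s until it disappears. A minimal sketch of such a pod follows; the image and command are illustrative assumptions, not the actual e2e fixture (which lives in test/e2e/common/lifecycle_hook.go):

```yaml
# Hypothetical reconstruction of a pod with a postStart exec hook.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: k8s.gcr.io/pause:3.1        # assumption: pause is a common e2e image
    lifecycle:
      postStart:
        exec:
          # Illustrative command; the real test calls back to a handler pod.
          command: ["/bin/sh", "-c", "echo poststart-ran"]
```

The hook runs immediately after the container is created; if it fails, the kubelet kills the container according to its restart policy.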
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 03:56:38.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-b1f50bae-d3be-4acf-8fa8-a7eb15054bb1 in namespace container-probe-987
Sep 18 03:56:42.208: INFO: Started pod test-webserver-b1f50bae-d3be-4acf-8fa8-a7eb15054bb1 in namespace container-probe-987
STEP: checking the pod's current state and verifying that restartCount is present
Sep 18 03:56:42.213: INFO: Initial restart count of pod test-webserver-b1f50bae-d3be-4acf-8fa8-a7eb15054bb1 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 04:00:44.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-987" for this suite.
Sep 18 04:00:50.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 04:00:50.518: INFO: namespace container-probe-987 deletion completed in 6.362785439s

• [SLOW TEST:252.410 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
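Editor's note: this probe test runs a small webserver with an HTTP liveness probe against `/healthz` and verifies over roughly four minutes (03:56:42 to 04:00:44) that `restartCount` stays at 0, i.e. a healthy endpoint never triggers a restart. A sketch of the probed pod, with image and timing values as illustrative assumptions:

```yaml
# Hypothetical reconstruction of the liveness-probe pod; values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/test-webserver   # assumption: serves 200 on /healthz
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15          # illustrative timings
      periodSeconds: 5
      failureThreshold: 1
```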
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 04:00:50.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Sep 18 04:00:50.610: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2102,SelfLink:/api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-configmap-a,UID:4598c8f4-7a01-4005-9e32-ebbaa48408bb,ResourceVersion:805264,Generation:0,CreationTimestamp:2020-09-18 04:00:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 18 04:00:50.611: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2102,SelfLink:/api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-configmap-a,UID:4598c8f4-7a01-4005-9e32-ebbaa48408bb,ResourceVersion:805264,Generation:0,CreationTimestamp:2020-09-18 04:00:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Sep 18 04:01:00.626: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2102,SelfLink:/api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-configmap-a,UID:4598c8f4-7a01-4005-9e32-ebbaa48408bb,ResourceVersion:805284,Generation:0,CreationTimestamp:2020-09-18 04:00:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Sep 18 04:01:00.627: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2102,SelfLink:/api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-configmap-a,UID:4598c8f4-7a01-4005-9e32-ebbaa48408bb,ResourceVersion:805284,Generation:0,CreationTimestamp:2020-09-18 04:00:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Sep 18 04:01:10.642: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2102,SelfLink:/api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-configmap-a,UID:4598c8f4-7a01-4005-9e32-ebbaa48408bb,ResourceVersion:805304,Generation:0,CreationTimestamp:2020-09-18 04:00:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 18 04:01:10.644: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2102,SelfLink:/api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-configmap-a,UID:4598c8f4-7a01-4005-9e32-ebbaa48408bb,ResourceVersion:805304,Generation:0,CreationTimestamp:2020-09-18 04:00:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Sep 18 04:01:20.655: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2102,SelfLink:/api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-configmap-a,UID:4598c8f4-7a01-4005-9e32-ebbaa48408bb,ResourceVersion:805324,Generation:0,CreationTimestamp:2020-09-18 04:00:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 18 04:01:20.656: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2102,SelfLink:/api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-configmap-a,UID:4598c8f4-7a01-4005-9e32-ebbaa48408bb,ResourceVersion:805324,Generation:0,CreationTimestamp:2020-09-18 04:00:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Sep 18 04:01:30.668: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2102,SelfLink:/api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-configmap-b,UID:0d51fb16-ce04-4716-bfa0-903c0ca0181b,ResourceVersion:805344,Generation:0,CreationTimestamp:2020-09-18 04:01:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 18 04:01:30.669: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2102,SelfLink:/api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-configmap-b,UID:0d51fb16-ce04-4716-bfa0-903c0ca0181b,ResourceVersion:805344,Generation:0,CreationTimestamp:2020-09-18 04:01:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Sep 18 04:01:40.688: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2102,SelfLink:/api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-configmap-b,UID:0d51fb16-ce04-4716-bfa0-903c0ca0181b,ResourceVersion:805366,Generation:0,CreationTimestamp:2020-09-18 04:01:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 18 04:01:40.689: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2102,SelfLink:/api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-configmap-b,UID:0d51fb16-ce04-4716-bfa0-903c0ca0181b,ResourceVersion:805366,Generation:0,CreationTimestamp:2020-09-18 04:01:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 04:01:50.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2102" for this suite.
Sep 18 04:01:56.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 04:01:56.859: INFO: namespace watch-2102 deletion completed in 6.155971286s

• [SLOW TEST:66.339 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
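Editor's note: each ADDED/MODIFIED/DELETED event above appears twice because two watchers match configmap A — the label-A watch and the label-A-or-B watch — while the label-B watch stays silent until configmap B is created. The watched object is an ordinary ConfigMap selected by label, reconstructed here from the log (the `mutation` key is what the test increments on each modify):

```yaml
# ConfigMap as observed by the watchers; fields reconstructed from the log above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-2102
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "2"
```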
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 04:01:56.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a010e254-e25e-47af-af46-e43ea486407b
STEP: Creating a pod to test consume secrets
Sep 18 04:01:57.057: INFO: Waiting up to 5m0s for pod "pod-secrets-36f20247-f59d-42d1-9e82-037c5b8eac3b" in namespace "secrets-9356" to be "success or failure"
Sep 18 04:01:57.061: INFO: Pod "pod-secrets-36f20247-f59d-42d1-9e82-037c5b8eac3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.978307ms
Sep 18 04:01:59.069: INFO: Pod "pod-secrets-36f20247-f59d-42d1-9e82-037c5b8eac3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011422912s
Sep 18 04:02:01.078: INFO: Pod "pod-secrets-36f20247-f59d-42d1-9e82-037c5b8eac3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020401197s
STEP: Saw pod success
Sep 18 04:02:01.078: INFO: Pod "pod-secrets-36f20247-f59d-42d1-9e82-037c5b8eac3b" satisfied condition "success or failure"
Sep 18 04:02:01.082: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-36f20247-f59d-42d1-9e82-037c5b8eac3b container secret-volume-test: 
STEP: delete the pod
Sep 18 04:02:01.120: INFO: Waiting for pod pod-secrets-36f20247-f59d-42d1-9e82-037c5b8eac3b to disappear
Sep 18 04:02:01.133: INFO: Pod pod-secrets-36f20247-f59d-42d1-9e82-037c5b8eac3b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 04:02:01.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9356" for this suite.
Sep 18 04:02:07.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 04:02:07.289: INFO: namespace secrets-9356 deletion completed in 6.146745182s
STEP: Destroying namespace "secret-namespace-2481" for this suite.
Sep 18 04:02:13.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 04:02:13.456: INFO: namespace secret-namespace-2481 deletion completed in 6.167056682s

• [SLOW TEST:16.593 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
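Editor's note: this test creates same-named secrets in two namespaces (hence the two namespace teardowns, secrets-9356 and secret-namespace-2481) and verifies the pod mounts only the secret from its own namespace. A sketch of the consuming pod; the secret name, image, and mount path are illustrative assumptions:

```yaml
# Hypothetical sketch: secretName resolves only within the pod's own namespace.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example            # illustrative name
  namespace: secrets-9356
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test          # same name may exist elsewhere; irrelevant here
  containers:
  - name: secret-volume-test
    image: busybox                     # assumption
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
```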
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 04:02:13.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Sep 18 04:02:13.612: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 18 04:02:13.634: INFO: Waiting for terminating namespaces to be deleted...
Sep 18 04:02:13.639: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Sep 18 04:02:13.651: INFO: kube-proxy-xbqp2 from kube-system started at 2020-09-13 16:51:07 +0000 UTC (1 container statuses recorded)
Sep 18 04:02:13.652: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 18 04:02:13.652: INFO: kindnet-85m7h from kube-system started at 2020-09-13 16:51:07 +0000 UTC (1 container statuses recorded)
Sep 18 04:02:13.652: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 18 04:02:13.652: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Sep 18 04:02:13.665: INFO: kube-proxy-v7g67 from kube-system started at 2020-09-13 16:51:07 +0000 UTC (1 container statuses recorded)
Sep 18 04:02:13.665: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 18 04:02:13.665: INFO: kindnet-jxh2j from kube-system started at 2020-09-13 16:51:07 +0000 UTC (1 container statuses recorded)
Sep 18 04:02:13.665: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Sep 18 04:02:13.773: INFO: Pod kindnet-85m7h requesting resource cpu=100m on Node iruya-worker
Sep 18 04:02:13.773: INFO: Pod kindnet-jxh2j requesting resource cpu=100m on Node iruya-worker2
Sep 18 04:02:13.773: INFO: Pod kube-proxy-v7g67 requesting resource cpu=0m on Node iruya-worker2
Sep 18 04:02:13.773: INFO: Pod kube-proxy-xbqp2 requesting resource cpu=0m on Node iruya-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6112a144-dbf7-4ec1-97c4-3e1c70c3cf74.1635c4e5c9732e26], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8771/filler-pod-6112a144-dbf7-4ec1-97c4-3e1c70c3cf74 to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6112a144-dbf7-4ec1-97c4-3e1c70c3cf74.1635c4e65533878f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6112a144-dbf7-4ec1-97c4-3e1c70c3cf74.1635c4e6a1c6f09b], Reason = [Created], Message = [Created container filler-pod-6112a144-dbf7-4ec1-97c4-3e1c70c3cf74]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6112a144-dbf7-4ec1-97c4-3e1c70c3cf74.1635c4e6b1252392], Reason = [Started], Message = [Started container filler-pod-6112a144-dbf7-4ec1-97c4-3e1c70c3cf74]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-d5fac3e2-d76e-48d3-b0b6-f6fc91a743d0.1635c4e5ca3a2c1f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8771/filler-pod-d5fac3e2-d76e-48d3-b0b6-f6fc91a743d0 to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-d5fac3e2-d76e-48d3-b0b6-f6fc91a743d0.1635c4e613e000a2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-d5fac3e2-d76e-48d3-b0b6-f6fc91a743d0.1635c4e66f4c1046], Reason = [Created], Message = [Created container filler-pod-d5fac3e2-d76e-48d3-b0b6-f6fc91a743d0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-d5fac3e2-d76e-48d3-b0b6-f6fc91a743d0.1635c4e68772ea29], Reason = [Started], Message = [Started container filler-pod-d5fac3e2-d76e-48d3-b0b6-f6fc91a743d0]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1635c4e6ba91cad3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 04:02:18.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8771" for this suite.
Sep 18 04:02:26.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 04:02:27.649: INFO: namespace sched-pred-8771 deletion completed in 8.677870153s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:14.193 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
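Editor's note: the scheduling test fills each worker's allocatable CPU with filler pods, then submits `additional-pod`, whose CPU request cannot fit anywhere; the expected outcome is the FailedScheduling event above ("2 Insufficient cpu", plus one tainted control-plane node). The over-requesting pod looks roughly like this; the request value is illustrative:

```yaml
# Hypothetical sketch of the unschedulable pod; the cpu request is illustrative,
# chosen to exceed whatever remains after the filler pods.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: additional-pod
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"
```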
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 04:02:27.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Sep 18 04:02:27.712: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 04:02:34.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9099" for this suite.
Sep 18 04:02:40.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 04:02:40.604: INFO: namespace init-container-9099 deletion completed in 6.155959772s

• [SLOW TEST:12.954 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
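Editor's note: with `restartPolicy: Never`, a failing init container is not retried, the app containers never start, and the pod phase goes to Failed — which is what this test asserts. A minimal sketch; images and names are illustrative assumptions:

```yaml
# Hypothetical sketch: the failing init container permanently fails the pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail-example          # illustrative name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox                     # assumption
    command: ["/bin/false"]            # exits non-zero; never retried under Never
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1        # never started
```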
S
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 18 04:02:40.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 18 04:02:40.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 18 04:02:44.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6627" for this suite.
Sep 18 04:03:24.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 18 04:03:25.065: INFO: namespace pods-6627 deletion completed in 40.170713748s

• [SLOW TEST:44.460 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Sep 18 04:03:25.072: INFO: Running AfterSuite actions on all nodes
Sep 18 04:03:25.072: INFO: Running AfterSuite actions on node 1
Sep 18 04:03:25.073: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 6142.436 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS